diff --git a/notebook/band-theory/density_of_states.ipynb b/notebook/band-theory/density_of_states.ipynb
index 66931d4..eccfeba 100644
--- a/notebook/band-theory/density_of_states.ipynb
+++ b/notebook/band-theory/density_of_states.ipynb
@@ -28,8 +28,8 @@
 "source": [
 "## **Goals**\n",
 " \n",
- "* Familiarize yourself with various methods to calculate the electronic density of states.\n",
- "* Examine the resulting DOS and compare the accuracy and computational cost of various methods to compute it"
+ "* Familiarize yourself with various numerical methods employed to calculate the electronic density of states.\n",
+ "* Examine the resulting DOS and compare the accuracy and computational cost of the various methods."
 ]
 },
 {
@@ -47,13 +47,12 @@
 "source": [
 "## **Tasks and exercises**\n",
 "\n",
- "1. Investigate the influence of the number of k-points o\n",
- ".n the resulting DOS.\n",
+ "1. Investigate the influence of the number of k-points on the resulting DOS.\n",
 "\n",
\n", " Solution\n", " In the right panel, the blue line is the analytical solution for the DOS. \n", " By choosing different numbers of kpoints via the \"Number of kpoints slider\", we can investigate how \n", - " the quality of the calculated results varies with the density of kpoint mesh. You will observe that the numerical results converge to the analytical result with increasing number of kpoints. This can be attributed to the fact that the DOS can be interpreted as a probability density of electronic states as a function of energy. Since energy is generally related to the k vector magnitude, the quality with which we resolve the range of energy eigenvalues is in turn directly controlled by how fine our sampling of the kpoint mesh is.\n", + " the quality of the calculated results varies with the density of the k-point mesh. You will observe that the numerical results converge to the analytical result with increasing number of k-points. This can be attributed to the fact that the DOS can be interpreted as a probability density of electronic states as a function of energy. Since energy is generally related to the k-vector magnitude, the quality with which we resolve the range of energy eigenvalues is in turn directly controlled by how fine our sampling of the k-point mesh is.\n", "
\n", "\n", "2. Which method gives most accurate results? Which method is fastest and why?\n", @@ -61,8 +60,8 @@ "
\n", " Solution\n", " The linear tetrahedra interpolation (LTI) method is an accurate numerical approach, \n", - " which interpolates the 3D kpoints grid. The LTI method can give much better \n", - " results rather than a simple histogram. Gaussian smearing makes the \n", + " which interpolates the 3D kpoints grid. The LTI method can yield much better \n", + " results compared to a simple histogram. Gaussian smearing makes the \n", " histogram plot much smoother, which is closer to the analytical \n", " solution. The histogram method is a simple statistic of the eigenvalues, \n", " which should be the fastest to compute, but which shall give results with poorer resolution.\n", @@ -94,7 +93,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 29, "metadata": {}, "outputs": [], "source": [ @@ -121,7 +120,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 30, "metadata": {}, "outputs": [], "source": [ @@ -139,7 +138,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 31, "metadata": {}, "outputs": [], "source": [ @@ -160,7 +159,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 32, "metadata": {}, "outputs": [], "source": [ @@ -202,6 +201,8 @@ " \n", " shape = (nkpt.value, nkpt.value, nkpt.value)\n", " kpts = np.dot(monkhorst_pack(shape), G).reshape(shape + (3,))\n", + " # with output:\n", + " # print(kpts)\n", " kpts = kpts.reshape(nkpt.value**3, 3)\n", "\n", " for i in range(-n, n+1):\n", @@ -215,7 +216,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 33, "metadata": {}, "outputs": [], "source": [ @@ -233,7 +234,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 34, "metadata": {}, "outputs": [], "source": [ @@ -447,7 +448,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 35, "metadata": { "tags": [] }, @@ -603,11 +604,11 @@ " # pdf_vals = np.array([norm(eigs.ravel(), scale=gstd).pdf(x) for x in gx])\n", " # gy = np.sum(pdf_vals, axis=-1)\n", " end = time.time()\n", - " with output:\n", - " print(\"(eigs.shape)={}. gy.shape={}\".format(eigs.shape,gy.shape))\n", + "# with output:\n", + "# print(\"(eigs.shape)={}. 
 "\n",
- " print(\"time taken =\")\n",
- " print(end - start)\n",
+ "# print(\"time taken =\")\n",
+ "# print(end - start)\n",
 " \n",
 " gy = gy/np.size(eigs)*np.shape(eigs)[-1]\n",
 " lgas, = ax_dos.plot(gy, gx, 'k--', label=\"Gaussian smearing\")\n",
@@ -710,7 +711,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 8,
+ "execution_count": 36,
 "metadata": {},
 "outputs": [],
 "source": [
@@ -745,10 +746,10 @@
 "# zaxis = dict(title=r'kz', titlefont_color='white'))))\n",
 "\n",
 "\n",
- "# # def update_kpts_fig(c):\n",
- "# # \"\"\"Update the kpoints plot when tuning the kpoints slider.\n",
- "# \"\"\"\n",
- "# kpts = _compute_total_kpts(G)\n",
+ "def update_kpts_fig(c):\n",
+ " \"\"\"Update the kpoints plot when tuning the kpoints slider.\n",
+ " \"\"\"\n",
+ " kpts = _compute_total_kpts(G)\n",
 " \n",
 "# with figkpts.batch_update():\n",
 "# figkpts.data[1].x = kpts[:, 0]\n",
@@ -760,29 +761,29 @@
 "# else:\n",
 "# figkpts.data[1].marker['size'] = 1.5\n",
 "\n",
- "# def half_sphere():\n",
- "# \"\"\"Only show half of the isosurface.\n",
- "# \"\"\"\n",
- "# X, Y, Z = np.mgrid[-6:6:40j, 0:6:40j, -6:6:40j]\n",
- "# values = 0.5*(X * X + Y * Y + Z * Z)\n",
- "# figkpts.data[0].x = X.flatten()\n",
- "# figkpts.data[0].y = Y.flatten()\n",
- "# figkpts.data[0].z = Z.flatten()\n",
- "# figkpts.data[0].value = values.flatten()\n",
+ "def half_sphere():\n",
+ " \"\"\"Only show half of the isosurface.\n",
+ " \"\"\"\n",
+ " X, Y, Z = np.mgrid[-6:6:40j, 0:6:40j, -6:6:40j]\n",
+ " values = 0.5*(X * X + Y * Y + Z * Z)\n",
+ " figkpts.data[0].x = X.flatten()\n",
+ " figkpts.data[0].y = Y.flatten()\n",
+ " figkpts.data[0].z = Z.flatten()\n",
+ " figkpts.data[0].value = values.flatten()\n",
 " \n",
 "\n",
- "# nkpt.observe(update_kpts_fig, names=\"value\")"
+ "nkpt.observe(update_kpts_fig, names=\"value\")"
 ]
 },
 {
 "cell_type": "code",
- "execution_count": 9,
+ "execution_count": 37,
 "metadata": {},
 "outputs": [
 {
 "data": {
 "application/vnd.jupyter.widget-view+json": {
- "model_id": "7047ce7ca26549aa803d7999620554d6",
+ "model_id": "a4a49c9c9cfc4acbbc3594ca30efc807",
 "version_major": 2,
 "version_minor": 0
 },
@@ -861,7 +862,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.12"
 },
 "voila": {
 "authors": "Dou Du and Giovanni Pizzi",
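The exercises in the patched notebook compare three DOS estimators (histogram, Gaussian smearing, and linear tetrahedra interpolation) through interactive widgets. The sketch below is not taken from the notebook; it is a minimal, self-contained illustration of the first two estimators for a free-electron band sampled on a uniform grid. The grid size `nk`, the smearing width `sigma`, and the uniform grid itself (standing in for the notebook's Monkhorst-Pack mesh) are assumptions chosen only for illustration.

```python
import time
import numpy as np

nk = 30          # k-points per direction; the notebook exposes this as a slider
sigma = 0.05     # Gaussian smearing width, in the energy units of the toy band

# Uniform k-point grid in fractional coordinates and a free-electron band E(k) = |k|^2 / 2
k = (np.arange(nk) + 0.5) / nk - 0.5
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
eigs = 0.5 * (kx**2 + ky**2 + kz**2).ravel()

energies = np.linspace(0.0, eigs.max(), 300)

# Histogram estimator: fast, but the energy resolution is set by the bin width
t0 = time.perf_counter()
dos_hist, _ = np.histogram(eigs, bins=energies.size, range=(0.0, eigs.max()), density=True)
t_hist = time.perf_counter() - t0

# Gaussian smearing: each eigenvalue contributes a normalized Gaussian centred at its
# energy, trading extra arithmetic for a smoother curve
t0 = time.perf_counter()
gauss = np.exp(-0.5 * ((energies[:, None] - eigs[None, :]) / sigma) ** 2)
dos_smear = gauss.sum(axis=1) / (eigs.size * sigma * np.sqrt(2.0 * np.pi))
t_smear = time.perf_counter() - t0

print(f"histogram: {t_hist:.4f} s   Gaussian smearing: {t_smear:.4f} s")
```

The LTI estimator is omitted here because it requires tetrahedron bookkeeping beyond a short sketch; the timing printout simply makes the cost comparison raised in exercise 2 concrete.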