\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **a_next[4]**: | [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] |\n",
+ "| **a_next.shape**: | (5, 10) |\n",
+ "| **yt[1]**: | [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] |\n",
+ "| **yt.shape**: | (2, 10) |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 1.2 - RNN forward pass \n",
+ "\n",
+ "You can see an RNN as the repetition of the cell you've just built. If your input sequence spans 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\\langle t-1 \\rangle}$) and the current time-step's input data ($x^{\\langle t \\rangle}$), and it outputs a hidden state ($a^{\\langle t \\rangle}$) and a prediction ($y^{\\langle t \\rangle}$) for this time-step, as sketched below.\n",
+ "\n",
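+ "Below is a minimal, illustrative sketch of such a loop (not the graded implementation). It assumes `rnn_cell_forward(xt, a_prev, parameters)` returns `(a_next, yt_pred, cache)` as built in the previous part, and uses the parameter keys `Wax, Waa, Wya, ba, by` from the test cells:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "def rnn_forward_sketch(x, a0, parameters):\n",
+ "    # x: (n_x, m, T_x), a0: (n_a, m)\n",
+ "    caches = []\n",
+ "    n_x, m, T_x = x.shape\n",
+ "    n_y, n_a = parameters['Wya'].shape\n",
+ "    a = np.zeros((n_a, m, T_x))       # hidden state at every time step\n",
+ "    y_pred = np.zeros((n_y, m, T_x))  # prediction at every time step\n",
+ "    a_next = a0\n",
+ "    for t in range(T_x):\n",
+ "        # one cell step: consume x<t> and a<t-1>, produce a<t> and y<t>\n",
+ "        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)\n",
+ "        a[:, :, t] = a_next\n",
+ "        y_pred[:, :, t] = yt_pred\n",
+ "        caches.append(cache)\n",
+ "    return a, y_pred, (caches, x)\n",
+ "```\n",
+ "\n",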
+ "\n",
+ "**Expected Output** (from the `rnn_forward` test):\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **a[4][1]**: | [-0.99999375 0.77911235 -0.99861469 -0.99833267] |\n",
+ "| **a.shape**: | (5, 10, 4) |\n",
+ "| **y[1][3]**: | [ 0.79560373 0.86224861 0.11118257 0.81515947] |\n",
+ "| **y.shape**: | (2, 10, 4) |\n",
+ "| **cache[1][1][3]**: | [-1.1425182 -0.34934272 -0.20889423 0.58662319] |\n",
+ "| **len(cache)**: | 2 |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\\langle t \\rangle}$ can be estimated using mainly \"local\" context (meaning information from inputs $x^{\\langle t' \\rangle}$ where $t'$ is not too far from $t$). \n",
+ "\n",
+ "In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM is able to remember a piece of information and keep it saved for many time steps. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2 - Long Short-Term Memory (LSTM) network\n",
+ "\n",
+ "The following figure shows the operations of an LSTM-cell.\n",
+ "\n",
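+ "Here is a minimal, illustrative sketch of the computations a single LSTM cell performs (not the graded implementation). It assumes `sigmoid` and `softmax` helper functions are available (e.g. from the notebook's utility imports) and uses the parameter names `Wf, bf, Wi, bi, Wc, bc, Wo, bo, Wy, by` that appear in the test cells below:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "def lstm_cell_forward_sketch(xt, a_prev, c_prev, parameters):\n",
+ "    # Stack the previous hidden state on top of the current input\n",
+ "    concat = np.concatenate((a_prev, xt), axis=0)\n",
+ "    ft = sigmoid(np.dot(parameters['Wf'], concat) + parameters['bf'])   # forget gate\n",
+ "    it = sigmoid(np.dot(parameters['Wi'], concat) + parameters['bi'])   # update gate\n",
+ "    cct = np.tanh(np.dot(parameters['Wc'], concat) + parameters['bc'])  # candidate value\n",
+ "    c_next = ft * c_prev + it * cct                                     # new cell state\n",
+ "    ot = sigmoid(np.dot(parameters['Wo'], concat) + parameters['bo'])   # output gate\n",
+ "    a_next = ot * np.tanh(c_next)                                       # new hidden state\n",
+ "    yt_pred = softmax(np.dot(parameters['Wy'], a_next) + parameters['by'])\n",
+ "    cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)\n",
+ "    return a_next, c_next, yt_pred, cache\n",
+ "```\n",
+ "\n",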
+ "\n",
+ "**Expected Output** (from the `lstm_cell_forward` test):\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **a_next[4]**: | [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] |\n",
+ "| **a_next.shape**: | (5, 10) |\n",
+ "| **c_next[2]**: | [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] |\n",
+ "| **c_next.shape**: | (5, 10) |\n",
+ "| **yt[1]**: | [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] |\n",
+ "| **yt.shape**: | (2, 10) |\n",
+ "| **cache[1][3]**: | [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] |\n",
+ "| **len(cache)**: | 10 |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2.2 - Forward pass for LSTM\n",
+ "\n",
+ "Now that you have implemented one step of an LSTM, you can iterate over it with a for-loop to process a sequence of $T_x$ inputs, as sketched below. \n",
+ "\n",
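+ "A minimal, illustrative sketch of that loop (not the graded implementation), assuming the `lstm_cell_forward(xt, a_prev, c_prev, parameters)` function from the previous step and, as an assumption, an initial cell state of zeros:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "def lstm_forward_sketch(x, a0, parameters):\n",
+ "    caches = []\n",
+ "    n_x, m, T_x = x.shape\n",
+ "    n_y, n_a = parameters['Wy'].shape\n",
+ "    a = np.zeros((n_a, m, T_x))   # hidden states\n",
+ "    c = np.zeros((n_a, m, T_x))   # cell states\n",
+ "    y = np.zeros((n_y, m, T_x))   # predictions\n",
+ "    a_next = a0\n",
+ "    c_next = np.zeros((n_a, m))   # assumption: the cell state starts at zero\n",
+ "    for t in range(T_x):\n",
+ "        a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)\n",
+ "        a[:, :, t] = a_next\n",
+ "        y[:, :, t] = yt\n",
+ "        c[:, :, t] = c_next\n",
+ "        caches.append(cache)\n",
+ "    return a, y, c, (caches, x)\n",
+ "```\n",
+ "\n",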
+ "\n",
+ "**Expected Output** (from the `lstm_forward` test):\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **a[4][3][6]** = | 0.172117767533 |\n",
+ "| **a.shape** = | (5, 10, 7) |\n",
+ "| **y[1][4][3]** = | 0.95087346185 |\n",
+ "| **y.shape** = | (2, 10, 7) |\n",
+ "| **caches[1][1][1]** = | [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] |\n",
+ "| **c[1][2][1]** = | -0.855544916718 |\n",
+ "| **len(caches)** = | 2 |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. \n",
+ "\n",
+ "The rest of this notebook is optional, and will not be graded."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)\n",
+ "\n",
+ "In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. \n",
+ "\n",
+ "When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives of the cost in order to update the parameters. Similarly, in recurrent neural networks you calculate the derivatives of the cost with respect to the parameters in order to update them. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3.1 - Basic RNN backward pass\n",
+ "\n",
+ "We will start by computing the backward pass for the basic RNN-cell.\n",
+ "\n",
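+ "Recall that the cell computes $a^{\\langle t \\rangle} = \\tanh(W_{ax} x^{\\langle t \\rangle} + W_{aa} a^{\\langle t-1 \\rangle} + b_a)$, so the key ingredient is the derivative of $\\tanh$: if $a = \\tanh(z)$ then $\\frac{\\partial a}{\\partial z} = 1 - a^2$. Here is a minimal, illustrative sketch of `rnn_cell_backward` (not the graded implementation), assuming the per-step cache layout `(a_next, a_prev, xt, parameters)` produced by `rnn_forward`:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "def rnn_cell_backward_sketch(da_next, cache):\n",
+ "    (a_next, a_prev, xt, parameters) = cache\n",
+ "    Wax, Waa = parameters['Wax'], parameters['Waa']\n",
+ "    # Backprop through the tanh: dz = da_next * (1 - a_next**2)\n",
+ "    dz = da_next * (1 - a_next ** 2)\n",
+ "    # Backprop through the linear step z = Wax @ xt + Waa @ a_prev + ba\n",
+ "    dxt = np.dot(Wax.T, dz)\n",
+ "    dWax = np.dot(dz, xt.T)\n",
+ "    da_prev = np.dot(Waa.T, dz)\n",
+ "    dWaa = np.dot(dz, a_prev.T)\n",
+ "    dba = np.sum(dz, axis=1, keepdims=True)\n",
+ "    return {'dxt': dxt, 'da_prev': da_prev, 'dWax': dWax, 'dWaa': dWaa, 'dba': dba}\n",
+ "```\n",
+ "\n",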
+ "\n",
+ "**Expected Output** (from the `rnn_cell_backward` test):\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **gradients[\"dxt\"][1][2]** = | -0.460564103059 |\n",
+ "| **gradients[\"dxt\"].shape** = | (3, 10) |\n",
+ "| **gradients[\"da_prev\"][2][3]** = | 0.0842968653807 |\n",
+ "| **gradients[\"da_prev\"].shape** = | (5, 10) |\n",
+ "| **gradients[\"dWax\"][3][1]** = | 0.393081873922 |\n",
+ "| **gradients[\"dWax\"].shape** = | (5, 3) |\n",
+ "| **gradients[\"dWaa\"][1][2]** = | -0.28483955787 |\n",
+ "| **gradients[\"dWaa\"].shape** = | (5, 5) |\n",
+ "| **gradients[\"dba\"][4]** = | [ 0.80517166] |\n",
+ "| **gradients[\"dba\"].shape** = | (5, 1) |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Backward pass through the RNN\n",
+ "\n",
+ "Computing the gradients of the cost with respect to $a^{\\langle t \\rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.\n",
+ "\n",
+ "**Instructions**:\n",
+ "\n",
+ "Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps, calling `rnn_cell_backward` at each time step and updating the other variables accordingly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def rnn_backward(da, caches):\n",
+ " \"\"\"\n",
+ " Implement the backward pass for a RNN over an entire sequence of input data.\n",
+ "\n",
+ " Arguments:\n",
+ " da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)\n",
+ " caches -- tuple containing information from the forward pass (rnn_forward)\n",
+ " \n",
+ " Returns:\n",
+ " gradients -- python dictionary containing:\n",
+ " dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)\n",
+ " da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)\n",
+ " dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)\n",
+ " dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-array of shape (n_a, n_a)\n",
+ " dba -- Gradient w.r.t the bias, of shape (n_a, 1)\n",
+ " \"\"\"\n",
+ " \n",
+ " ### START CODE HERE ###\n",
+ " \n",
+ " # Retrieve values from the first cache (t=1) of caches (≈2 lines)\n",
+ " (caches, x) = caches\n",
+ " (a1, a0, x1, parameters) = caches[0]\n",
+ " \n",
+ " # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n",
+ " n_a, m, T_x = da.shape\n",
+ " n_x, m = x1.shape\n",
+ " \n",
+ " # initialize the gradients with the right sizes (≈6 lines)\n",
+ " dx = np.zeros((n_x, m, T_x))\n",
+ " dWax = np.zeros((n_a, n_x))\n",
+ " dWaa = np.zeros((n_a, n_a))\n",
+ " dba = np.zeros((n_a, 1))\n",
+ " da0 = np.zeros((n_a, m))\n",
+ " da_prevt = np.zeros((n_a, m))\n",
+ " \n",
+ " # Loop through all the time steps\n",
+ " for t in reversed(range(T_x)):\n",
+ " # Compute gradients at time step t. Choose wisely the \"da_next\" and the \"cache\" to use in the backward propagation step. (≈1 line)\n",
+ " gradients = rnn_cell_backward(da[:,:, t] + da_prevt, caches[t])\n",
+ " # Retrieve derivatives from gradients (≈ 1 line)\n",
+ " dxt, da_prevt, dWaxt, dWaat, dbat = gradients[\"dxt\"], gradients[\"da_prev\"], gradients[\"dWax\"], gradients[\"dWaa\"], gradients[\"dba\"]\n",
+ " # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)\n",
+ " dx[:, :, t] = dxt\n",
+ " dWax += dWaxt\n",
+ " dWaa += dWaat\n",
+ " dba += dbat\n",
+ " \n",
+ " # Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line) \n",
+ " da0 = da_prevt\n",
+ " ### END CODE HERE ###\n",
+ "\n",
+ " # Store the gradients in a python dictionary\n",
+ " gradients = {\"dx\": dx, \"da0\": da0, \"dWax\": dWax, \"dWaa\": dWaa,\"dba\": dba}\n",
+ " \n",
+ " return gradients"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "gradients[\"dx\"][1][2] = [-2.07101689 -0.59255627 0.02466855 0.01483317]\n",
+ "gradients[\"dx\"].shape = (3, 10, 4)\n",
+ "gradients[\"da0\"][2][3] = -0.314942375127\n",
+ "gradients[\"da0\"].shape = (5, 10)\n",
+ "gradients[\"dWax\"][3][1] = 11.2641044965\n",
+ "gradients[\"dWax\"].shape = (5, 3)\n",
+ "gradients[\"dWaa\"][1][2] = 2.30333312658\n",
+ "gradients[\"dWaa\"].shape = (5, 5)\n",
+ "gradients[\"dba\"][4] = [-0.74747722]\n",
+ "gradients[\"dba\"].shape = (5, 1)\n"
+ ]
+ }
+ ],
+ "source": [
+ "np.random.seed(1)\n",
+ "x = np.random.randn(3,10,4)\n",
+ "a0 = np.random.randn(5,10)\n",
+ "Wax = np.random.randn(5,3)\n",
+ "Waa = np.random.randn(5,5)\n",
+ "Wya = np.random.randn(2,5)\n",
+ "ba = np.random.randn(5,1)\n",
+ "by = np.random.randn(2,1)\n",
+ "parameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"ba\": ba, \"by\": by}\n",
+ "a, y, caches = rnn_forward(x, a0, parameters)\n",
+ "da = np.random.randn(5, 10, 4)\n",
+ "gradients = rnn_backward(da, caches)\n",
+ "\n",
+ "print(\"gradients[\\\"dx\\\"][1][2] =\", gradients[\"dx\"][1][2])\n",
+ "print(\"gradients[\\\"dx\\\"].shape =\", gradients[\"dx\"].shape)\n",
+ "print(\"gradients[\\\"da0\\\"][2][3] =\", gradients[\"da0\"][2][3])\n",
+ "print(\"gradients[\\\"da0\\\"].shape =\", gradients[\"da0\"].shape)\n",
+ "print(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\n",
+ "print(\"gradients[\\\"dWax\\\"].shape =\", gradients[\"dWax\"].shape)\n",
+ "print(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\n",
+ "print(\"gradients[\\\"dWaa\\\"].shape =\", gradients[\"dWaa\"].shape)\n",
+ "print(\"gradients[\\\"dba\\\"][4] =\", gradients[\"dba\"][4])\n",
+ "print(\"gradients[\\\"dba\\\"].shape =\", gradients[\"dba\"].shape)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Expected Output**:\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **gradients[\"dx\"][1][2]** = | [-2.07101689 -0.59255627 0.02466855 0.01483317] |\n",
+ "| **gradients[\"dx\"].shape** = | (3, 10, 4) |\n",
+ "| **gradients[\"da0\"][2][3]** = | -0.314942375127 |\n",
+ "| **gradients[\"da0\"].shape** = | (5, 10) |\n",
+ "| **gradients[\"dWax\"][3][1]** = | 11.2641044965 |\n",
+ "| **gradients[\"dWax\"].shape** = | (5, 3) |\n",
+ "| **gradients[\"dWaa\"][1][2]** = | 2.30333312658 |\n",
+ "| **gradients[\"dWaa\"].shape** = | (5, 5) |\n",
+ "| **gradients[\"dba\"][4]** = | [-0.74747722] |\n",
+ "| **gradients[\"dba\"].shape** = | (5, 1) |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3.2 - LSTM backward pass"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3.2.1 One step backward\n",
+ "\n",
+ "The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) \n",
+ "\n",
+ "### 3.2.2 Gate derivatives\n",
+ "\n",
+ "$$d \\Gamma_o^{\\langle t \\rangle} = da_{next}*\\tanh(c_{next}) * \\Gamma_o^{\\langle t \\rangle}*(1-\\Gamma_o^{\\langle t \\rangle})\\tag{7}$$\n",
+ "\n",
+ "$$d\tilde c^{\langle t \rangle} = (dc_{next}*\Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \Gamma_u^{\langle t \rangle} * da_{next}) * (1-(\tilde c^{\langle t \rangle})^2) \tag{8}$$\n",
+ "\n",
+ "$$d\Gamma_u^{\langle t \rangle} = (dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next})*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$\n",
+ "\n",
+ "$$d\Gamma_f^{\langle t \rangle} = (dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next})*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$\n",
+ "\n",
+ "### 3.2.3 Parameter derivatives \n",
+ "\n",
+ "$$ dW_f = d\\Gamma_f^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{11} $$\n",
+ "$$ dW_u = d\\Gamma_u^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{12} $$\n",
+ "$$ dW_c = d\\tilde c^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{13} $$\n",
+ "$$ dW_o = d\\Gamma_o^{\\langle t \\rangle} * \\begin{pmatrix} a_{prev} \\\\ x_t\\end{pmatrix}^T \\tag{14}$$\n",
+ "\n",
+ "To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (axis=1) of $d\\Gamma_f^{\\langle t \\rangle}, d\\Gamma_u^{\\langle t \\rangle}, d\\tilde c^{\\langle t \\rangle}, d\\Gamma_o^{\\langle t \\rangle}$ respectively, using the `keepdims=True` option so each result keeps shape $(n_a, 1)$.\n",
+ "\n",
+ "Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.\n",
+ "\n",
+ "$$ da_{prev} = W_f^T*d\\Gamma_f^{\\langle t \\rangle} + W_u^T * d\\Gamma_u^{\\langle t \\rangle}+ W_c^T * d\\tilde c^{\\langle t \\rangle} + W_o^T * d\\Gamma_o^{\\langle t \\rangle} \\tag{15}$$\n",
+ "Here, the weights used in equation 15 are the first $n_a$ columns of each matrix (i.e. $W_f = W_f[:,:n_a]$ etc.).\n",
+ "\n",
+ "$$ dc_{prev} = dc_{next}\\Gamma_f^{\\langle t \\rangle} + \\Gamma_o^{\\langle t \\rangle} * (1- \\tanh(c_{next})^2)*\\Gamma_f^{\\langle t \\rangle}*da_{next} \\tag{16}$$\n",
+ "$$ dx^{\\langle t \\rangle} = W_f^T*d\\Gamma_f^{\\langle t \\rangle} + W_u^T * d\\Gamma_u^{\\langle t \\rangle}+ W_c^T * d\\tilde c_t + W_o^T * d\\Gamma_o^{\\langle t \\rangle}\\tag{17} $$\n",
+ "where the weights used in equation 17 are the columns from $n_a$ to the end (i.e. $W_f = W_f[:,n_a:]$ etc.).\n",
+ "\n",
+ "**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ above. Good luck! :)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def lstm_cell_backward(da_next, dc_next, cache):\n",
+ " \"\"\"\n",
+ " Implement the backward pass for the LSTM-cell (single time-step).\n",
+ "\n",
+ " Arguments:\n",
+ " da_next -- Gradients of next hidden state, of shape (n_a, m)\n",
+ " dc_next -- Gradients of next cell state, of shape (n_a, m)\n",
+ " cache -- cache storing information from the forward pass\n",
+ "\n",
+ " Returns:\n",
+ " gradients -- python dictionary containing:\n",
+ " dxt -- Gradient of input data at time-step t, of shape (n_x, m)\n",
+ " da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)\n",
+ " dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)\n",
+ " dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n",
+ " dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n",
+ " dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n",
+ " dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)\n",
+ " \"\"\"\n",
+ "\n",
+ " # Retrieve information from \"cache\"\n",
+ " (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache\n",
+ " \n",
+ " ### START CODE HERE ###\n",
+ " # Retrieve dimensions from xt's and a_next's shape (≈2 lines)\n",
+ " n_x, m = xt.shape\n",
+ " n_a, m = a_next.shape\n",
+ " \n",
+ " # Compute derivatives of the gates; their values can be found by looking carefully at equations (7) to (10) (≈4 lines)\n",
+ " dot = da_next * np.tanh(c_next) * ot * (1 - ot)\n",
+ " dcct = (dc_next * it + ot * (1 - np.square(np.tanh(c_next))) * it * da_next) * (1 - np.square(cct))\n",
+ " dit = (dc_next * cct + ot * (1 - np.square(np.tanh(c_next))) * cct * da_next) * it * (1 - it)\n",
+ " dft = (dc_next * c_prev + ot *(1 - np.square(np.tanh(c_next))) * c_prev * da_next) * ft * (1 - ft)\n",
+ " \n",
+ " # Concatenate a_prev and xt, as in the forward pass (needed for the parameter gradients)\n",
+ " concat = np.concatenate((a_prev, xt), axis=0)\n",
+ "\n",
+ " # Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)\n",
+ " dWf = np.dot(dft, concat.T)\n",
+ " dWi = np.dot(dit, concat.T)\n",
+ " dWc = np.dot(dcct, concat.T)\n",
+ " dWo = np.dot(dot, concat.T)\n",
+ " dbf = np.sum(dft, axis=1, keepdims=True)\n",
+ " dbi = np.sum(dit, axis=1, keepdims=True)\n",
+ " dbc = np.sum(dcct, axis=1, keepdims=True)\n",
+ " dbo = np.sum(dot, axis=1, keepdims=True)\n",
+ "\n",
+ " # Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)\n",
+ " da_prev = np.dot(parameters['Wf'][:, :n_a].T, dft) + np.dot(parameters['Wi'][:, :n_a].T, dit) + np.dot(parameters['Wc'][:, :n_a].T, dcct) + np.dot(parameters['Wo'][:, :n_a].T, dot)\n",
+ " dc_prev = dc_next * ft + ot * (1 - np.square(np.tanh(c_next))) * ft * da_next\n",
+ " dxt = np.dot(parameters['Wf'][:, n_a:].T, dft) + np.dot(parameters['Wi'][:, n_a:].T, dit) + np.dot(parameters['Wc'][:, n_a:].T, dcct) + np.dot(parameters['Wo'][:, n_a:].T, dot)\n",
+ " ### END CODE HERE ###\n",
+ " \n",
+ " # Save gradients in dictionary\n",
+ " gradients = {\"dxt\": dxt, \"da_prev\": da_prev, \"dc_prev\": dc_prev, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n",
+ " \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n",
+ "\n",
+ " return gradients"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "gradients[\"dxt\"][1][2] = 3.23055911511\n",
+ "gradients[\"dxt\"].shape = (3, 10)\n",
+ "gradients[\"da_prev\"][2][3] = -0.0639621419711\n",
+ "gradients[\"da_prev\"].shape = (5, 10)\n",
+ "gradients[\"dc_prev\"][2][3] = 0.797522038797\n",
+ "gradients[\"dc_prev\"].shape = (5, 10)\n",
+ "gradients[\"dWf\"][3][1] = -0.147954838164\n",
+ "gradients[\"dWf\"].shape = (5, 8)\n",
+ "gradients[\"dWi\"][1][2] = 1.05749805523\n",
+ "gradients[\"dWi\"].shape = (5, 8)\n",
+ "gradients[\"dWc\"][3][1] = 2.30456216369\n",
+ "gradients[\"dWc\"].shape = (5, 8)\n",
+ "gradients[\"dWo\"][1][2] = 0.331311595289\n",
+ "gradients[\"dWo\"].shape = (5, 8)\n",
+ "gradients[\"dbf\"][4] = [ 0.18864637]\n",
+ "gradients[\"dbf\"].shape = (5, 1)\n",
+ "gradients[\"dbi\"][4] = [-0.40142491]\n",
+ "gradients[\"dbi\"].shape = (5, 1)\n",
+ "gradients[\"dbc\"][4] = [ 0.25587763]\n",
+ "gradients[\"dbc\"].shape = (5, 1)\n",
+ "gradients[\"dbo\"][4] = [ 0.13893342]\n",
+ "gradients[\"dbo\"].shape = (5, 1)\n"
+ ]
+ }
+ ],
+ "source": [
+ "np.random.seed(1)\n",
+ "xt = np.random.randn(3,10)\n",
+ "a_prev = np.random.randn(5,10)\n",
+ "c_prev = np.random.randn(5,10)\n",
+ "Wf = np.random.randn(5, 5+3)\n",
+ "bf = np.random.randn(5,1)\n",
+ "Wi = np.random.randn(5, 5+3)\n",
+ "bi = np.random.randn(5,1)\n",
+ "Wo = np.random.randn(5, 5+3)\n",
+ "bo = np.random.randn(5,1)\n",
+ "Wc = np.random.randn(5, 5+3)\n",
+ "bc = np.random.randn(5,1)\n",
+ "Wy = np.random.randn(2,5)\n",
+ "by = np.random.randn(2,1)\n",
+ "\n",
+ "parameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n",
+ "\n",
+ "a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)\n",
+ "\n",
+ "da_next = np.random.randn(5,10)\n",
+ "dc_next = np.random.randn(5,10)\n",
+ "gradients = lstm_cell_backward(da_next, dc_next, cache)\n",
+ "print(\"gradients[\\\"dxt\\\"][1][2] =\", gradients[\"dxt\"][1][2])\n",
+ "print(\"gradients[\\\"dxt\\\"].shape =\", gradients[\"dxt\"].shape)\n",
+ "print(\"gradients[\\\"da_prev\\\"][2][3] =\", gradients[\"da_prev\"][2][3])\n",
+ "print(\"gradients[\\\"da_prev\\\"].shape =\", gradients[\"da_prev\"].shape)\n",
+ "print(\"gradients[\\\"dc_prev\\\"][2][3] =\", gradients[\"dc_prev\"][2][3])\n",
+ "print(\"gradients[\\\"dc_prev\\\"].shape =\", gradients[\"dc_prev\"].shape)\n",
+ "print(\"gradients[\\\"dWf\\\"][3][1] =\", gradients[\"dWf\"][3][1])\n",
+ "print(\"gradients[\\\"dWf\\\"].shape =\", gradients[\"dWf\"].shape)\n",
+ "print(\"gradients[\\\"dWi\\\"][1][2] =\", gradients[\"dWi\"][1][2])\n",
+ "print(\"gradients[\\\"dWi\\\"].shape =\", gradients[\"dWi\"].shape)\n",
+ "print(\"gradients[\\\"dWc\\\"][3][1] =\", gradients[\"dWc\"][3][1])\n",
+ "print(\"gradients[\\\"dWc\\\"].shape =\", gradients[\"dWc\"].shape)\n",
+ "print(\"gradients[\\\"dWo\\\"][1][2] =\", gradients[\"dWo\"][1][2])\n",
+ "print(\"gradients[\\\"dWo\\\"].shape =\", gradients[\"dWo\"].shape)\n",
+ "print(\"gradients[\\\"dbf\\\"][4] =\", gradients[\"dbf\"][4])\n",
+ "print(\"gradients[\\\"dbf\\\"].shape =\", gradients[\"dbf\"].shape)\n",
+ "print(\"gradients[\\\"dbi\\\"][4] =\", gradients[\"dbi\"][4])\n",
+ "print(\"gradients[\\\"dbi\\\"].shape =\", gradients[\"dbi\"].shape)\n",
+ "print(\"gradients[\\\"dbc\\\"][4] =\", gradients[\"dbc\"][4])\n",
+ "print(\"gradients[\\\"dbc\\\"].shape =\", gradients[\"dbc\"].shape)\n",
+ "print(\"gradients[\\\"dbo\\\"][4] =\", gradients[\"dbo\"][4])\n",
+ "print(\"gradients[\\\"dbo\\\"].shape =\", gradients[\"dbo\"].shape)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Expected Output**:\n",
+ "\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **gradients[\"dxt\"][1][2]** = | 3.23055911511 |\n",
+ "| **gradients[\"dxt\"].shape** = | (3, 10) |\n",
+ "| **gradients[\"da_prev\"][2][3]** = | -0.0639621419711 |\n",
+ "| **gradients[\"da_prev\"].shape** = | (5, 10) |\n",
+ "| **gradients[\"dc_prev\"][2][3]** = | 0.797522038797 |\n",
+ "| **gradients[\"dc_prev\"].shape** = | (5, 10) |\n",
+ "| **gradients[\"dWf\"][3][1]** = | -0.147954838164 |\n",
+ "| **gradients[\"dWf\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWi\"][1][2]** = | 1.05749805523 |\n",
+ "| **gradients[\"dWi\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWc\"][3][1]** = | 2.30456216369 |\n",
+ "| **gradients[\"dWc\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWo\"][1][2]** = | 0.331311595289 |\n",
+ "| **gradients[\"dWo\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dbf\"][4]** = | [ 0.18864637] |\n",
+ "| **gradients[\"dbf\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbi\"][4]** = | [-0.40142491] |\n",
+ "| **gradients[\"dbi\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbc\"][4]** = | [ 0.25587763] |\n",
+ "| **gradients[\"dbc\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbo\"][4]** = | [ 0.13893342] |\n",
+ "| **gradients[\"dbo\"].shape** = | (5, 1) |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3.3 Backward pass through the LSTM RNN\n",
+ "\n",
+ "This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimensions as your return variables. You will then iterate over all the time steps, starting from the end, and call the one-step function you implemented for the LSTM at each iteration. You will then accumulate the parameter gradients by summing them over the time steps. Finally, return a dictionary with the new gradients. \n",
+ "\n",
+ "**Instructions**: Implement the `lstm_backward` function. Create a for-loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 66,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "def lstm_backward(da, caches):\n",
+ " \n",
+ " \"\"\"\n",
+ " Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).\n",
+ "\n",
+ " Arguments:\n",
+ " da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)\n",
+ " caches -- cache storing information from the forward pass (lstm_forward)\n",
+ "\n",
+ " Returns:\n",
+ " gradients -- python dictionary containing:\n",
+ " dx -- Gradient of inputs, of shape (n_x, m, T_x)\n",
+ " da0 -- Gradient w.r.t. the initial hidden state, numpy array of shape (n_a, m)\n",
+ " dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)\n",
+ " dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)\n",
+ " dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)\n",
+ " dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)\n",
+ " dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)\n",
+ " \"\"\"\n",
+ "\n",
+ " # Retrieve values from the first cache (t=1) of caches.\n",
+ " (caches, x) = caches\n",
+ " (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]\n",
+ " \n",
+ " ### START CODE HERE ###\n",
+ " # Retrieve dimensions from da's and x1's shapes (≈2 lines)\n",
+ " n_a, m, T_x = da.shape\n",
+ " n_x, m = x1.shape\n",
+ " \n",
+ " # initialize the gradients with the right sizes (≈12 lines)\n",
+ " dx = np.zeros((n_x, m, T_x))\n",
+ " da0 = np.zeros((n_a, m))\n",
+ " da_prevt = np.zeros(da0.shape)\n",
+ " dc_prevt = np.zeros(da0.shape)\n",
+ " dWf = np.zeros((n_a, n_a + n_x))\n",
+ " dWi = np.zeros(dWf.shape)\n",
+ " dWc = np.zeros(dWf.shape)\n",
+ " dWo = np.zeros(dWf.shape)\n",
+ " dbf = np.zeros((n_a, 1))\n",
+ " dbi = np.zeros(dbf.shape)\n",
+ " dbc = np.zeros(dbf.shape)\n",
+ " dbo = np.zeros(dbf.shape)\n",
+ " \n",
+ " # loop back over the whole sequence\n",
+ " for t in reversed(range(T_x)):\n",
+ " # Compute all gradients using lstm_cell_backward\n",
+ " gradients = lstm_cell_backward(da[:, :, t], dc_prevt, caches[t])\n",
+ " # Store or add the gradient to the parameters' previous step's gradient\n",
+ " dx[:,:,t] = gradients[\"dxt\"]\n",
+ " dWf += gradients[\"dWf\"]\n",
+ " dWi += gradients[\"dWi\"]\n",
+ " dWc += gradients[\"dWc\"]\n",
+ " dWo += gradients[\"dWo\"]\n",
+ " dbf += gradients[\"dbf\"]\n",
+ " dbi += gradients[\"dbi\"]\n",
+ " dbc += gradients[\"dbc\"]\n",
+ " dbo += gradients[\"dbo\"]\n",
+ " # Set the first activation's gradient to the backpropagated gradient da_prev.\n",
+ " da0 = gradients[\"da_prev\"]\n",
+ " \n",
+ " ### END CODE HERE ###\n",
+ "\n",
+ " # Store the gradients in a python dictionary\n",
+ " gradients = {\"dx\": dx, \"da0\": da0, \"dWf\": dWf,\"dbf\": dbf, \"dWi\": dWi,\"dbi\": dbi,\n",
+ " \"dWc\": dWc,\"dbc\": dbc, \"dWo\": dWo,\"dbo\": dbo}\n",
+ " \n",
+ " return gradients"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 67,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "gradients[\"dx\"][1][2] = [-0.00173313 0.08287442 -0.30545663 -0.43281115]\n",
+ "gradients[\"dx\"].shape = (3, 10, 4)\n",
+ "gradients[\"da0\"][2][3] = -0.095911501954\n",
+ "gradients[\"da0\"].shape = (5, 10)\n",
+ "gradients[\"dWf\"][3][1] = -0.0698198561274\n",
+ "gradients[\"dWf\"].shape = (5, 8)\n",
+ "gradients[\"dWi\"][1][2] = 0.102371820249\n",
+ "gradients[\"dWi\"].shape = (5, 8)\n",
+ "gradients[\"dWc\"][3][1] = -0.0624983794927\n",
+ "gradients[\"dWc\"].shape = (5, 8)\n",
+ "gradients[\"dWo\"][1][2] = 0.0484389131444\n",
+ "gradients[\"dWo\"].shape = (5, 8)\n",
+ "gradients[\"dbf\"][4] = [-0.0565788]\n",
+ "gradients[\"dbf\"].shape = (5, 1)\n",
+ "gradients[\"dbi\"][4] = [-0.15399065]\n",
+ "gradients[\"dbi\"].shape = (5, 1)\n",
+ "gradients[\"dbc\"][4] = [-0.29691142]\n",
+ "gradients[\"dbc\"].shape = (5, 1)\n",
+ "gradients[\"dbo\"][4] = [-0.29798344]\n",
+ "gradients[\"dbo\"].shape = (5, 1)\n"
+ ]
+ }
+ ],
+ "source": [
+ "np.random.seed(1)\n",
+ "x = np.random.randn(3,10,7)\n",
+ "a0 = np.random.randn(5,10)\n",
+ "Wf = np.random.randn(5, 5+3)\n",
+ "bf = np.random.randn(5,1)\n",
+ "Wi = np.random.randn(5, 5+3)\n",
+ "bi = np.random.randn(5,1)\n",
+ "Wo = np.random.randn(5, 5+3)\n",
+ "bo = np.random.randn(5,1)\n",
+ "Wc = np.random.randn(5, 5+3)\n",
+ "bc = np.random.randn(5,1)\n",
+ "\n",
+ "parameters = {\"Wf\": Wf, \"Wi\": Wi, \"Wo\": Wo, \"Wc\": Wc, \"Wy\": Wy, \"bf\": bf, \"bi\": bi, \"bo\": bo, \"bc\": bc, \"by\": by}\n",
+ "\n",
+ "a, y, c, caches = lstm_forward(x, a0, parameters)\n",
+ "\n",
+ "da = np.random.randn(5, 10, 4)\n",
+ "gradients = lstm_backward(da, caches)\n",
+ "\n",
+ "print(\"gradients[\\\"dx\\\"][1][2] =\", gradients[\"dx\"][1][2])\n",
+ "print(\"gradients[\\\"dx\\\"].shape =\", gradients[\"dx\"].shape)\n",
+ "print(\"gradients[\\\"da0\\\"][2][3] =\", gradients[\"da0\"][2][3])\n",
+ "print(\"gradients[\\\"da0\\\"].shape =\", gradients[\"da0\"].shape)\n",
+ "print(\"gradients[\\\"dWf\\\"][3][1] =\", gradients[\"dWf\"][3][1])\n",
+ "print(\"gradients[\\\"dWf\\\"].shape =\", gradients[\"dWf\"].shape)\n",
+ "print(\"gradients[\\\"dWi\\\"][1][2] =\", gradients[\"dWi\"][1][2])\n",
+ "print(\"gradients[\\\"dWi\\\"].shape =\", gradients[\"dWi\"].shape)\n",
+ "print(\"gradients[\\\"dWc\\\"][3][1] =\", gradients[\"dWc\"][3][1])\n",
+ "print(\"gradients[\\\"dWc\\\"].shape =\", gradients[\"dWc\"].shape)\n",
+ "print(\"gradients[\\\"dWo\\\"][1][2] =\", gradients[\"dWo\"][1][2])\n",
+ "print(\"gradients[\\\"dWo\\\"].shape =\", gradients[\"dWo\"].shape)\n",
+ "print(\"gradients[\\\"dbf\\\"][4] =\", gradients[\"dbf\"][4])\n",
+ "print(\"gradients[\\\"dbf\\\"].shape =\", gradients[\"dbf\"].shape)\n",
+ "print(\"gradients[\\\"dbi\\\"][4] =\", gradients[\"dbi\"][4])\n",
+ "print(\"gradients[\\\"dbi\\\"].shape =\", gradients[\"dbi\"].shape)\n",
+ "print(\"gradients[\\\"dbc\\\"][4] =\", gradients[\"dbc\"][4])\n",
+ "print(\"gradients[\\\"dbc\\\"].shape =\", gradients[\"dbc\"].shape)\n",
+ "print(\"gradients[\\\"dbo\\\"][4] =\", gradients[\"dbo\"][4])\n",
+ "print(\"gradients[\\\"dbo\\\"].shape =\", gradients[\"dbo\"].shape)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**Expected Output**:\n",
+ "\n",
+ "\n",
+ "| | |\n",
+ "| --- | --- |\n",
+ "| **gradients[\"dx\"][1][2]** = | [-0.00173313 0.08287442 -0.30545663 -0.43281115] |\n",
+ "| **gradients[\"dx\"].shape** = | (3, 10, 4) |\n",
+ "| **gradients[\"da0\"][2][3]** = | -0.095911501954 |\n",
+ "| **gradients[\"da0\"].shape** = | (5, 10) |\n",
+ "| **gradients[\"dWf\"][3][1]** = | -0.0698198561274 |\n",
+ "| **gradients[\"dWf\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWi\"][1][2]** = | 0.102371820249 |\n",
+ "| **gradients[\"dWi\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWc\"][3][1]** = | -0.0624983794927 |\n",
+ "| **gradients[\"dWc\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dWo\"][1][2]** = | 0.0484389131444 |\n",
+ "| **gradients[\"dWo\"].shape** = | (5, 8) |\n",
+ "| **gradients[\"dbf\"][4]** = | [-0.0565788] |\n",
+ "| **gradients[\"dbf\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbi\"][4]** = | [-0.06997391] |\n",
+ "| **gradients[\"dbi\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbc\"][4]** = | [-0.27441821] |\n",
+ "| **gradients[\"dbc\"].shape** = | (5, 1) |\n",
+ "| **gradients[\"dbo\"][4]** = | [ 0.16532821] |\n",
+ "| **gradients[\"dbo\"].shape** = | (5, 1) |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Congratulations!\n",
+ "\n",
+ "Congratulations on completing this assignment. You now understand how recurrent neural networks work! \n",
+ "\n",
+ "Let's go on to the next exercise, where you'll use an RNN to build a character-level language model.\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "coursera": {
+ "course_slug": "nlp-sequence-models",
+ "graded_item_id": "xxuVc",
+ "launcher_item_id": "X20PE"
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM.png
new file mode 100644
index 0000000..c2e333d
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM_rnn.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM_rnn.png
new file mode 100644
index 0000000..fbb9190
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/LSTM_rnn.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/clip.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/clip.png
new file mode 100644
index 0000000..d7685cc
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/clip.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/initial_state.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/initial_state.png
new file mode 100644
index 0000000..b8ec43d
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/initial_state.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn.png
new file mode 100644
index 0000000..c0ab647
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_cell_backprop.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_cell_backprop.png
new file mode 100644
index 0000000..5bb04f4
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_cell_backprop.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_step_forward.png b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_step_forward.png
new file mode 100644
index 0000000..260db60
Binary files /dev/null and b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Building a Recurrent Neural Network - Step by Step/images/rnn_step_forward.png differ
diff --git a/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
new file mode 100644
index 0000000..4e8fbb8
--- /dev/null
+++ b/Deep Learning Notebooks by Andrew NG/Sequence Models/Week1/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
@@ -0,0 +1,1202 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Character level language model - Dinosaurus land\n",
+ "\n",
+ "Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! \n",
+ "\n",
+ "