
Commit bd97bee

Fix typos.
1 parent 9726590 commit bd97bee

7 files changed, 37 additions and 42 deletions


content/posts/An-Inquiry-into-Matplotlib-Figures/index.md

Lines changed: 5 additions & 10 deletions
@@ -33,13 +33,13 @@ import matplotlib as mpl
---

Although a beginner can follow along with this guide, it is primarily meant for people who have at least a basic knowledge of how Matplotlib's plotting functionality works.

-Essentially, if you know how to take 2 `numpy` arrays and plot them (using an appropriate type of graph) on 2 different axes in a single figure and give it basic styling, you're good to go for the purposes of this guide.
+Essentially, if you know how to take 2 NumPy arrays and plot them (using an appropriate type of graph) on 2 different axes in a single figure and give it basic styling, you're good to go for the purposes of this guide.

If you feel you need some introduction to basic Matplotlib plotting, here's a great guide that can help you get a feel for introductory plotting using Matplotlib : https://matplotlib.org/devdocs/gallery/subplots_axes_and_figures/subplots_demo.html

From here on, I will be assuming that you have gained sufficient knowledge to follow along this guide.

-Also, in order to save everyone's time, I will keep my explanations short, terse and very much to the point, and sometimes leave it for the reader to interpret things (because that's what I've done throughtout this guide for myself anyway).
+Also, in order to save everyone's time, I will keep my explanations short, terse and very much to the point, and sometimes leave it for the reader to interpret things (because that's what I've done throughout this guide for myself anyway).

The primary driver in this whole exercise will be code and not text, and I encourage you to spin up a Jupyter notebook and type in and try out everything yourself to make the best use of this resource.
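For reference, a minimal sketch of the prerequisite described in this hunk: two NumPy arrays plotted on two Axes of a single figure. The data and styling below are placeholders, not taken from the post.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)   # placeholder data
y1, y2 = np.sin(x), np.cos(x)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y1, color='tab:blue')
ax2.plot(x, y2, color='tab:orange')
ax1.set_title('sin(x)')
ax2.set_title('cos(x)')
plt.show()
```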

@@ -131,7 +131,7 @@ plt.show()
Our goal today is to take apart the previous snippet of code and understand all of the underlying building blocks well enough so that we can use them separately and in a much more powerful way.

-If you're a beginner like I was before writing this guide, let me assure you: this is all very simple stuff.
+If you're a beginner like I was before writing this guide, let me assure you: this is all very simple stuff.

Going into [`plt.subplots`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html?highlight=subplots#matplotlib.pyplot.subplots) documentation (hit `Shift+Tab+Tab` in a Jupyter notebook) reveals some of the other Matplotlib internals that it uses in order to give us the `Figure` and it's `Axes`.
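As a rough, hedged sketch of what that call boils down to (not the library's actual internals), `plt.subplots` essentially creates a `Figure` and then adds `Axes` to it on a grid:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
axes = [fig.add_subplot(2, 2, i) for i in range(1, 5)]  # four Axes on a 2x2 grid
print(type(fig))      # a matplotlib.figure.Figure
print(type(axes[0]))  # an Axes (subplot) instance
```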

@@ -583,7 +583,7 @@ Here's a bullet point summary of what this means:
This ability to create different grid variations that `GridSpec` provides is probably the reason for that anomaly we saw a while ago (printing different Addresses).

-It creates new objects everytime you index into it because it will be very troublesome to store all permutations of `SubplotSpec` objects into one group in memory (try and count permutations for a `GridSpec` of 10x10 and you'll know why)
+It creates new objects every time you index into it because it will be very troublesome to store all permutations of `SubplotSpec` objects into one group in memory (try and count permutations for a `GridSpec` of 10x10 and you'll know why)

---
## Now let's finally create `plt.subplots(2,2)` once again using GridSpec
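A small sketch of the behaviour this hunk describes: indexing a `GridSpec` builds a fresh `SubplotSpec` on each access, so the printed addresses differ (the exact repr varies between Matplotlib versions).

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure()
gs = GridSpec(2, 2, figure=fig)

# Two accesses to the same cell yield two distinct SubplotSpec objects.
print(gs[0, 0] is gs[0, 0])        # False
print(id(gs[0, 0]), id(gs[0, 0]))  # two different addresses
```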
@@ -611,12 +611,7 @@ Here's a few things I think you should go ahead and explore:
1. Multiple `GridSpec` objects for the Same Figure.
2. Deleting and adding `Axes` effectively and meaningfully.
3. All the methods available for `mpl.figure.Figure` and `mpl.axes.Axes` allowing us to manipulate their properties.
-4. Kaggle Learn's Data vizualization course is a great place to learn effective plotting using Python
+4. Kaggle Learn's Data visualization course is a great place to learn effective plotting using Python
5. Armed with knowledge, you will be able to use other plotting libraries such as `seaborn`, `plotly`, `pandas` and `altair` with much more flexibility (you can pass an `Axes` object to all their plotting functions). I encourage you to explore these libraries too.

This is the first time I've written any technical guide for the internet, it may not be as clean as tutorials generally are. But, I'm open to all the constructive criticism that you may have for me (drop me an email on akashpalrecha@gmail.com)
-
-
-```python
-
-```
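A hedged sketch of point 5 above, using pandas as the example library (the DataFrame is made up): most high-level plotting functions accept an existing `Axes` via an `ax=` argument.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'a': np.random.randn(100).cumsum(),
                   'b': np.random.randn(100).cumsum()})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
df.plot(ax=ax1)             # pandas draws onto the Axes you hand it
df['a'].plot.hist(ax=ax2)   # same idea for other plot kinds
plt.show()
```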

content/posts/create-a-tesla-cybertruck-that-drives/index.md

Lines changed: 15 additions & 15 deletions
@@ -137,24 +137,24 @@ fig
![png](output_8_0.png)

-#### Axels
+#### Axles

-I used the `Rectangle` patch to represent the two 'axels' (this isn't the correct term, but you'll see what I mean) going through the tires. You must provide a coordinate for the lower left corner, a width, and a height. You can also provide it an angle (in degrees) to control its orientation. Notice that they go under the spokes plotted from above. This is due to their lower `zorder`.
+I used the `Rectangle` patch to represent the two 'axles' (this isn't the correct term, but you'll see what I mean) going through the tires. You must provide a coordinate for the lower left corner, a width, and a height. You can also provide it an angle (in degrees) to control its orientation. Notice that they go under the spokes plotted from above. This is due to their lower `zorder`.

```python
-def create_axels():
+def create_axles():
    ax = fig.axes[0]
-    left_left_axel = Rectangle((.687, .427), width=.104, height=.005, angle=315, color='#202328')
-    left_right_axel = Rectangle((.761, .427), width=.104, height=.005, angle=225, color='#202328')
-    right_left_axel = Rectangle((1.367, .427), width=.104, height=.005, angle=315, color='#202328')
-    right_right_axel = Rectangle((1.441, .427), width=.104, height=.005, angle=225, color='#202328')
+    left_left_axle = Rectangle((.687, .427), width=.104, height=.005, angle=315, color='#202328')
+    left_right_axle = Rectangle((.761, .427), width=.104, height=.005, angle=225, color='#202328')
+    right_left_axle = Rectangle((1.367, .427), width=.104, height=.005, angle=315, color='#202328')
+    right_right_axle = Rectangle((1.441, .427), width=.104, height=.005, angle=225, color='#202328')

-    ax.add_patch(left_left_axel)
-    ax.add_patch(left_right_axel)
-    ax.add_patch(right_left_axel)
-    ax.add_patch(right_right_axel)
+    ax.add_patch(left_left_axle)
+    ax.add_patch(left_right_axle)
+    ax.add_patch(right_left_axle)
+    ax.add_patch(right_right_axle)

-create_axels()
+create_axles()
fig
```
@@ -212,7 +212,7 @@ fig
The head light beam has a distinct color gradient that dissipates into the night sky. This is challenging to complete. I found an [excellent answer on Stack Overflow from user Joe Kington][0] on how to do this. We begin by using the `imshow` function which creates images from 3-dimensional arrays. Our image will simply be a rectangle of colors.

-We create a 1 x 100 x 4 array that represents 1 row by 100 columns of points of RGBA (red, green, blue, alpha) values. Every point is given the same red, green, and blue values of (0, 1, 1) which represents the color 'aqua'. The alpha value represents opacity and ranges between 0 and 1 with 0 being completely transparent (invisible) and 1 being opaque. We would like the opacity to decrease as the light extends further from the head light (that is further to the left). The numpy `linspace` function is used to create an array of 100 numbers increasing linearly from 0 to 1. This array will be set as the alpha values.
+We create a 1 x 100 x 4 array that represents 1 row by 100 columns of points of RGBA (red, green, blue, alpha) values. Every point is given the same red, green, and blue values of (0, 1, 1) which represents the color 'aqua'. The alpha value represents opacity and ranges between 0 and 1 with 0 being completely transparent (invisible) and 1 being opaque. We would like the opacity to decrease as the light extends further from the head light (that is further to the left). The NumPy `linspace` function is used to create an array of 100 numbers increasing linearly from 0 to 1. This array will be set as the alpha values.

The `extent` parameter defines the rectangular region where the image will be shown. The four values correspond to xmin, xmax, ymin, and ymax. The 100 alpha values will be mapped to this region beginning from the left. The array of alphas begins at 0, which means that the very left of this rectangular region will be transparent. The opacity will increase moving to the right-side of the rectangle where it eventually reaches 1.
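A self-contained sketch of the gradient technique this hunk describes; the `extent` coordinates below are placeholders, not the post's actual beam geometry.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 2))

beam = np.zeros((1, 100, 4))            # 1 row x 100 columns of RGBA points
beam[:, :, :3] = (0, 1, 1)              # every point is 'aqua' (R=0, G=1, B=1)
beam[:, :, 3] = np.linspace(0, 1, 100)  # alpha ramps from transparent to opaque

# extent = (xmin, xmax, ymin, ymax): a placeholder region for the beam
ax.imshow(beam, extent=(0.0, 1.0, 0.4, 0.6), aspect='auto')
ax.set_xlim(0, 1.2)
ax.set_ylim(0, 1)
plt.show()
```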

@@ -270,7 +270,7 @@ def draw_car():
    create_axes(draft=False)
    create_body()
    create_tires()
-    create_axels()
+    create_axles()
    create_other_details()
    create_headlight_beam()
    create_headlight_beam()
@@ -285,7 +285,7 @@ fig
Animation in Matplotlib is fairly straightforward. You must create a function that updates the position of the objects in your figure for each frame. This function is called repeatedly for each frame.

-In the `update` function below, we loop through each patch, line, and image in our Axes and reduce the x-value of each plotted object by .015. This has the effect of moving the truck to the left. The trickiest part was changing the x and y values for the rectangular tire 'axels' so that it appeared that the tires were rotating. Some basic trigonometry helps calculate this.
+In the `update` function below, we loop through each patch, line, and image in our Axes and reduce the x-value of each plotted object by .015. This has the effect of moving the truck to the left. The trickiest part was changing the x and y values for the rectangular tire 'axles' so that it appeared that the tires were rotating. Some basic trigonometry helps calculate this.

Implicitly, Matplotlib passes the update function the frame number as an integer as the first argument. We accept this input as the parameter `frame_number`. We only use it in one place, and that is to do nothing during the first frame.
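A simplified, self-contained sketch of that animation pattern; it only shifts `Rectangle` patches and skips the tire-rotation trigonometry, and the geometry is made up.

```python
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
ax.set_xlim(0, 2)
ax.set_ylim(0, 1)
ax.add_patch(Rectangle((1.5, 0.4), width=0.3, height=0.15))

def update(frame_number):
    # Matplotlib passes the frame index as the first argument; do nothing on frame 0.
    if frame_number == 0:
        return
    # Shift every rectangle patch 0.015 to the left each frame.
    for patch in ax.patches:
        patch.set_x(patch.get_x() - 0.015)

anim = FuncAnimation(fig, update, frames=60, interval=30, repeat=False)
plt.show()
```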

content/posts/custom-3d-engine/index.md

Lines changed: 8 additions & 8 deletions
@@ -19,7 +19,7 @@ resources:
Matplotlib has a really nice [3D
interface](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html) with many
capabilities (and some limitations) that is quite popular among users. Yet, 3D
-is still considered to be some kind of black magick for some users (or maybe
+is still considered to be some kind of black magic for some users (or maybe
for the majority of users). I would thus like to explain in this post that 3D
rendering is really easy once you've understood a few concepts. To demonstrate
that, we'll render the bunny above with 60 lines of Python and one Matplotlib
@@ -101,8 +101,8 @@ top bunny uses a [perspective projection](https://en.wikipedia.org/wiki/3D_proje
![](projections.png)

In both cases, the proper way of defining a projection is first to define a
-viewing volume, that is, the volume in the 3d space we want to render on the
-scree. To do that, we need to consider 6 clipping planes (left, right, top,
+viewing volume, that is, the volume in the 3D space we want to render on the
+screen. To do that, we need to consider 6 clipping planes (left, right, top,
bottom, far, near) that enclose the viewing volume (frustum) relatively to the
camera. If we define a camera position and a viewing direction, each plane can
be described by a single scalar. Once we have this viewing volume, we can
@@ -135,12 +135,12 @@ For the perspective projection, we also need to specify the aperture angle that
plane. Consequently, for high apertures, you'll get a lot of "deformations".

However, if you look at the two functions above, you'll realize they return 4x4
-matrices while our coordinates are 3d. How to use these matrices then ? The
+matrices while our coordinates are 3D. How to use these matrices then ? The
answer is [homogeneous
coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates). To make
a long story short, homogeneous coordinates are best to deal with transformation
and projections in 3D. In our case, because we're dealing with vertices (and
-not vectors), we only need to add 1 as the fourth coordinates (w) to all our
+not vectors), we only need to add 1 as the fourth coordinate (`w`) to all our
vertices. Then we can apply the perspective transformation using the dot
product.

@@ -149,8 +149,8 @@ V = np.c_[V, np.ones(len(V))] @ perspective(25,1,1,100).T
```

Last step, we need to re-normalize the homogeneous coordinates. This means we
-divide each transformed vertices with the last component (w) such as to always
-have w=1 for each vertices.
+divide each transformed vertices with the last component (`w`) such as to
+always have `w`=1 for each vertices.

```
V /= V[:,3].reshape(-1,1)
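For reference, a self-contained sketch of a `perspective(fovy, aspect, znear, zfar)` helper using the standard OpenGL-style matrix, together with the homogeneous steps quoted in this hunk. The post's own implementation may differ in detail, and the vertices below are random stand-ins for the bunny.

```python
import numpy as np

def perspective(fovy, aspect, znear, zfar):
    # Standard OpenGL-style perspective projection matrix (textbook formula).
    f = 1.0 / np.tan(np.radians(fovy) / 2.0)
    M = np.zeros((4, 4))
    M[0, 0] = f / aspect
    M[1, 1] = f
    M[2, 2] = (zfar + znear) / (znear - zfar)
    M[2, 3] = 2.0 * zfar * znear / (znear - zfar)
    M[3, 2] = -1.0
    return M

# Toy vertices pushed back along -z so they sit inside the viewing volume.
V = np.random.uniform(-1, 1, (10, 3)) - (0, 0, 3.5)

# Append w=1, project with the 4x4 matrix, then divide by w to re-normalize.
V = np.c_[V, np.ones(len(V))] @ perspective(25, 1, 1, 100).T
V /= V[:, 3].reshape(-1, 1)
```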
@@ -265,7 +265,7 @@ And now everything is rendered right ([bunny-7.py](bunny-7.py)):
Let's add some colors using the depth buffer. We'll color each triangle
according to it depth. The beauty of the PolyCollection object is that you can
-specify the color of each of the triangle using a numpy array, so let's just do
+specify the color of each of the triangle using a NumPy array, so let's just do
that:

```
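As a hedged illustration of per-triangle coloring with `PolyCollection` (toy triangles and a fake depth array, not the bunny data):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection

# Toy data: six triangles in 2D and a fake per-triangle "depth" value.
triangles = np.random.rand(6, 3, 2)
depth = np.linspace(0, 1, len(triangles))

# One RGBA facecolor per triangle, derived from its depth.
facecolors = plt.cm.gray(depth)

fig, ax = plt.subplots()
ax.add_collection(PolyCollection(triangles, facecolors=facecolors, edgecolor='black'))
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()
```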

content/posts/matplotlib-in-data-driven-seo/index.md

Lines changed: 3 additions & 3 deletions
@@ -18,11 +18,11 @@ resources:
Search Engine Optimization (SEO) is a process that aims to increase quantity and quality of website traffic by ensuring a website can be found in search engines for phrases that are relevant to what the site is offering. Google is the most popular search engine in the world and presence in top search results is invaluable for any online business since click rates drop exponentially with ranking position. Since the beginning, specialized entities have been decoding signals that influence position in search engine result page (SERP) focusing on e.g. number of outlinks, presence of keywords or content length. Developed practices typically resulted in better visibility, but needed to be constantly challenged because search engines introduce changes to their algorithms even every day. Since the rapid advancements in Big Data and machine learning finding significant ranking factors became increasingly more difficult. Thus, the whole SEO field required a shift where recommendations are backed up by large scale studies based on real data rather than old-fashioned practices. [Whites Agency](https://whites.agency/) focuses strongly on Data-Driven SEO. We run many Big Data analyses which give us insights into multiple optimization opportunities.

-Majority of cases we are dealing with right now focus on data harvesting and analysis. Data presentation plays an important part and since the beginning, we needed a tool that would allow us to experiment with different forms of visualizations. Because our organization is Python driven, Matplotlib was a straightforward choice for us. It is a mature project that offers flexibility and control. Among other features, Matplotlib figures can be easily exported not only to raster graphic formats (png, jpg) but also to vector ones (svg, pdf, eps), creating high-quality images that can be embedded in HTML code, LaTeX or utilized by graphic designers. In one of our projects, Matplotlib was a part of the Python processing pipeline that automatically generated pdf summaries from an HTML template for individual clients. Every data visualization project has the same core presented in the figure below, where data is loaded from the database, processed in pandas or PySpark and finally visualized with Matplotlib.
+Majority of cases we are dealing with right now focus on data harvesting and analysis. Data presentation plays an important part and since the beginning, we needed a tool that would allow us to experiment with different forms of visualizations. Because our organization is Python driven, Matplotlib was a straightforward choice for us. It is a mature project that offers flexibility and control. Among other features, Matplotlib figures can be easily exported not only to raster graphic formats (PNG, JPG) but also to vector ones (SVG, PDF, EPS), creating high-quality images that can be embedded in HTML code, LaTeX or utilized by graphic designers. In one of our projects, Matplotlib was a part of the Python processing pipeline that automatically generated PDF summaries from an HTML template for individual clients. Every data visualization project has the same core presented in the figure below, where data is loaded from the database, processed in pandas or PySpark and finally visualized with Matplotlib.

![Data Visualization Pipeline at Whites Agency](fig1.png)

-In what follows, we would like to share two insights from our studies. All figures were prepeared in Matplotlib and in each case we set up a global style (overwritten if necessary):
+In what follows, we would like to share two insights from our studies. All figures were prepared in Matplotlib and in each case we set up a global style (overwritten if necessary):
```
import matplotlib.pyplot as plt
from cycler import cycler
@@ -47,7 +47,7 @@ plt.rcParams['ytick.labelsize'] = 13
plt.rcParams['lines.linewidth'] = 2.0
```
# Case 1: Website Speed Performance
-Our R&D department analyzed a set of 10,000 potential customer intent phrases from ​​the *Electronics* (eCommerce) and *News* domains (5000 phrases each). They scraped data from the Google ranking in a specific location (London, United Kingdom) both for mobile and desktop results [full study available [here](https://whites.agency/blog/google-lighthouse-study-seo-ranking-factors-in-ecommerce-vs-news/)]. Based on those data, we distinguished TOP 20 results that appeared in SERPs. Next, each page was audited with the [Google Lighthouse tool](https://developers.google.com/web/tools/lighthouse). Google Lighthouse is an open-source, automated tool for improving the quality of web pages, that among other collects information about website loading time. A single sample from our analysis which shows variations of *Time to First Byte* (TTFB) as a function of Google position (grouped in threes) is presented below. TTFB measures the time it takes for a user's browser to receive the first byte of page content. Regardless of the device, TTFB score is the lowest for websites that occurred in TOP 3 positions. The difference is significant, especially between TOP 3 and 4-6 results. Therefore, Google favors websites that respond fast and therefore it is adviced to invest in website speed optimization.
+Our R&D department analyzed a set of 10,000 potential customer intent phrases from the *Electronics* (eCommerce) and *News* domains (5000 phrases each). They scraped data from the Google ranking in a specific location (London, United Kingdom) both for mobile and desktop results [full study available [here](https://whites.agency/blog/google-lighthouse-study-seo-ranking-factors-in-ecommerce-vs-news/)]. Based on those data, we distinguished TOP 20 results that appeared in SERPs. Next, each page was audited with the [Google Lighthouse tool](https://developers.google.com/web/tools/lighthouse). Google Lighthouse is an open-source, automated tool for improving the quality of web pages, that among other collects information about website loading time. A single sample from our analysis which shows variations of *Time to First Byte* (TTFB) as a function of Google position (grouped in threes) is presented below. TTFB measures the time it takes for a user's browser to receive the first byte of page content. Regardless of the device, TTFB score is the lowest for websites that occurred in TOP 3 positions. The difference is significant, especially between TOP 3 and 4-6 results. Therefore, Google favors websites that respond fast and therefore it is advised to invest in website speed optimization.

![Time to first byte from Lighthouse study performed at Whites Agency.](fig2.png)
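A minimal sketch of the export flexibility mentioned earlier in this file's diff (the figure content and filenames are placeholders): the same figure can be written to raster or vector formats just by changing the extension.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 1, 3])

fig.savefig('summary.png', dpi=300)  # raster (PNG/JPG)
fig.savefig('summary.svg')           # vector, editable by graphic designers
fig.savefig('summary.pdf')           # vector, embeddable in LaTeX/PDF reports
```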
