`npm install`

like all the cool kids, which led to adding jekyll-node-module to the site to be able to copy things in from `node_packages` without committing it all to the repo.
The 3D outline rendered images on the project pages are done with code from Omar Shehata.

## Future

There are a few plugins I’m thinking of adding:

- jekyll-auto-image to simplify choosing a representative image for a particular post.
- I’d like to write a jekyll plugin for D2 diagrams. It doesn’t look too difficult.
- I’m wondering if I should switch the rendering to use pandoc. I used pandoc in the past to convert my thesis to multiple output formats, and I suspect it is now a more active project than jekyll itself; for example, someone has already written a D2 pandoc filter.

There’s a good list of jekyll plugins here.

The vector data is entirely served from a static file on this server. Most interactive web maps work by constantly requesting little map images from an external server at different zoom levels. This approach uses much less data and doesn’t require an external server to host all the map data.

Getting this to work was a little tricky; I mostly followed the steps from Simon Willison’s post but I didn’t want to use npm. As I write this I realise that this site is generated with jekyll, which uses npm anyway, but somehow I would like the individual posts to Just Work™ without worrying about updating libraries and npm.

So I grabbed `maplibre-gl.css`, `maplibre-gl.js` and `pmtiles.js`, plonked them into this site and started hacking around. I ended up mashing up the code from Simon Willison’s post and the official examples to get something that worked.

I figured out from this github issue how to grab a module version of `protomaps-themes-base` without npm. However I don’t really like the styles it produces. Instead I played around a bit with the generated json styles to make something that looks a bit more like the Stamen Toner theme.

Looking at the source code for `protomaps-themes-base`, I realise I could probably make custom themes much more easily by just swapping out the theme variables in the package.
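
To make that concrete, here’s a rough sketch of the sort of recolouring I mean. The tiny style dict is a made-up stand-in, not real protomaps-themes-base output, which has many more layers and properties:

```python
import json

# A made-up miniature MapLibre style standing in for the generated json.
style = {
    "version": 8,
    "layers": [
        {"id": "water", "type": "fill", "paint": {"fill-color": "#a0c8f0"}},
        {"id": "roads", "type": "line", "paint": {"line-color": "#e892a2"}},
    ],
}

for layer in style["layers"]:
    # Push everything towards a black-and-white, Toner-ish look
    if layer["type"] == "fill":
        layer["paint"]["fill-color"] = "#ffffff"
    elif layer["type"] == "line":
        layer["paint"]["line-color"] = "#000000"

print(json.dumps(style, indent=2))
```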

Todo:

- Figure out how to use maputnik to generate styles for PMTiles.

I had a bunch of issues getting that to work, mostly because these tiles are raster images intended for streaming to a zoomable and pannable viewer on a screen. The design tradeoffs of the maps don’t quite make as much sense when you start transferring them to a static image. I did some hacks to use the tiles intended for a higher zoom level but you can only take that so far before the text starts getting unreadable.

I think there is a better approach that involves getting raw OpenStreetMap data and rendering it directly using something like QGIS and some kind of map style files but that seems like a whole new deep rabbit hole I’m not ready to fall into just yet.

And here’s a quick fit test. When doing joints like this you need to compensate for the kerf of the laser (maybe kerf isn’t the right word but you know what I mean). I found about 0.1mm of kerf compensation worked well for this laser with 6mm ply.

These black and white map tiles are from Stamen Design, essentially a really nice style sheet on top of © OpenStreetMap contributor data. The rest are OS Maps from the National Library of Scotland. The viewer is leaflet.js.

In related news, my excellent co-working space / carpentry workshop / pottery studio currently has a massive laser cutter which we may or may not keep for the long term.

Given the laser cutter is so massive I thought it might be fun to try to produce a huge map. There’s a spot at the top of the stairs in our flat that I think could be nice for it. My partner and I have always lived somewhere in this vertical strip of London so the tall thin shape has some significance.

Given how long those took to cut, I’m thinking that I’ll split the design into multiple panels so I don’t have to babysit the laser cutter for 24 hours.

Let’s see how that pans out next time!

Anyway, to celebrate the occasion, and because I now have a reason to think about how fast I might run a particular distance, I had a look at my historical run data. There’s a great website called statshunter that you can authorise with Strava and from which you can download a little csv of all your runs. The first logical thing I could think to do was to see how fast I tend to run different distances.

A friend lent me a huge running book which I’m going to dig through more but I suspect one of the conclusions will be a bit obvious: I could run those shorter distances a lot faster.

That same friend also lent me a heart rate watch which I’ve been playing with. So the next thing I want to learn about is what type of heart rates you should target when you train for a particular event.

Code:

```
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd

def fade_out_by_date(dates, min_alpha=0.5):
    # Fade older runs out: map dates linearly onto (min_alpha, 1).
    # (This helper wasn't shown in the original listing, so it's defined here.)
    age = (dates - dates.min()) / (dates.max() - dates.min())
    return min_alpha + (1 - min_alpha) * age.values

runs = pd.read_csv("runs.csv", parse_dates=["Date"])  # Get this from statshunter.com

f, (ax2, ax) = plt.subplots(nrows=2, figsize=(5, 5), sharex=True,
                            gridspec_kw=dict(height_ratios=(1, 2)))
ax.set(ylabel="Moving Time (mins)", xlabel="Distance (km)")
x = runs["Distance (m)"].values / 1e3
y = runs["Moving time"].values / 60
dists = np.linspace(1, 25, 2)
for i in [5, 6, 7]:
    mins_per_km = i * dists
    ax.plot(dists, mins_per_km, color="black", linestyle="dotted", label=f"{i} min/km")
    ax.text(25.5, 25 * i, f"{i} min/km", va="center")
ax.annotate("Half Marathon!", (x[0], y[0] - 1), (20, 50), arrowprops=dict(arrowstyle="->"))
ax.scatter(x, y, s=20, alpha=0.6 * fade_out_by_date(runs["Date"]))
for a in [ax, ax2]:
    a.spines[['right', 'top']].set_visible(False)
ax2.hist(x, bins=30, alpha=0.5)
ax2.set(yticks=[], ylabel="Frequency Density")
f.savefig("time_vs_distance_plus_hist.svg", transparent=True)
```
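
For what it’s worth, the pace numbers behind a plot like this are just moving time over distance; a tiny sketch with made-up numbers standing in for the statshunter csv:

```python
import pandas as pd

# Made-up stand-in for the statshunter csv used above.
runs = pd.DataFrame({
    "Distance (m)": [5000, 10000, 21100, 5200, 9800],
    "Moving time": [1500, 3300, 7600, 1450, 3200],  # seconds
})
# Pace in minutes per km for each run
runs["pace"] = (runs["Moving time"] / 60) / (runs["Distance (m)"] / 1e3)
print(runs["pace"].round(2).tolist())  # → [5.0, 5.5, 6.0, 4.65, 5.44]
```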

Incidentally this is also how podcasts work, at least for now, though Spotify is clearly trying to capture them.

Anyway, I usually use theoldreader to read RSS feeds but lately they’ve implemented a premium version that you have to pay $3 a month for if you have more than 100 feeds (I have 99…).

Honestly, I use their service a lot so somehow $3 doesn’t seem so bad, but it spurred me to look into selfhosting.

Selfhosting seems to be all the rage these days. Probably in response to feeling locked in to corporate megastructures, the aforementioned computery nerdy types have gone looking for ways to maintain their own anarchic web infrastructure. See e.g. the indieweb movement, Mastodon, etc.

So I want to try out some self hosting. Let’s start with an RSS reader. Miniflux seems well regarded. So I popped over there, grabbed a `docker-compose.yml`, ran `docker compose up -d` and we seem to be off to the races.

Ok, a nice thing about Miniflux compared to theoldreader is that the former seems to be better at telling you when there’s something wrong with your feeds. It told me about a few blogs it couldn’t reach, notably Derek Lowe’s excellent blog about chemical drug discovery.

That blog has an rss feed, which loads perfectly fine in my browser but doesn’t seem to work outside of that context, e.g. in python:

```
>>> import requests
>>> requests.get("https://blogs.sciencemag.org/pipeline/feed")
<Response [403]>
```

Playing around a bit more, adding in user agents, accepting cookies and following redirects, I eventually get back a page with a challenge that requires JS to run. This is the antithesis of how RSS should work!
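
For the record, here’s roughly the sort of thing I tried, sketched with stdlib `urllib` rather than requests: a browser-ish User-Agent, a cookie jar, and redirect following (which urllib does by default). None of it helps against a challenge page that needs JavaScript:

```python
from urllib.request import Request, build_opener, HTTPCookieProcessor
from http.cookiejar import CookieJar

# An opener that accepts and replays cookies across requests
jar = CookieJar()
opener = build_opener(HTTPCookieProcessor(jar))

request = Request(
    "https://blogs.sciencemag.org/pipeline/feed",
    headers={
        # Pretend to be a normal browser
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
        "Accept": "application/rss+xml, text/html",
    },
)
# response = opener.open(request)  # left commented out: needs network access
print(request.get_header("User-agent"))
```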

Ok so to fix this I came upon RSSHub, which is a kind of RSS proxy: it parses sites that don’t have RSS feeds and generates them for you. I saw that it has puppeteer support so I’m hoping that I can use it to bypass the anti-crawler tactics science.org is using.

Anyway, for now here is a docker-compose.yml for both miniflux and RSSHub. What took me a while to figure out is that docker containers live in their own special network. So to subscribe to a selfhosted RSSHub feed you need to put something like “http://rsshub:1200/”, where rsshub is the key to the image in the yaml file below.

EDIT: I got it to work using puppeteer! For now the code is in a branch for which I’ll do a proper PR soon.

```
version: '3'
services:
  miniflux:
    image: miniflux/miniflux:latest
    # build:
    #   context: .
    #   dockerfile: packaging/docker/alpine/Dockerfile
    container_name: miniflux
    restart: always
    healthcheck:
      test: ["CMD", "/usr/bin/miniflux", "-healthcheck", "auto"]
    ports:
      - "8889:8080"
    depends_on:
      - rsshub
      - db
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=test123
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
    volumes:
      - miniflux-db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "miniflux"]
      interval: 10s
      start_period: 30s
  rsshub:
    # two ways to enable puppeteer:
    # * comment out marked lines, then use this image instead: diygod/rsshub:chromium-bundled
    # * (consumes more disk space and memory) leave everything unchanged
    image: diygod/rsshub
    restart: always
    ports:
      - '1200:1200'
    environment:
      NODE_ENV: production
      CACHE_TYPE: redis
      REDIS_URL: 'redis://redis:6379/'
      PUPPETEER_WS_ENDPOINT: 'ws://browserless:3000' # marked
    depends_on:
      - redis
      - browserless # marked
  browserless: # marked
    image: browserless/chrome # marked
    restart: always # marked
    ulimits: # marked
      core: # marked
        hard: 0 # marked
        soft: 0 # marked
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - redis-data:/data
volumes:
  miniflux-db:
  redis-data:
```

## Backup RSS feed list

I put a small script in the repo to backup.

```
python -m venv ~/miniflux_python_env
source ~/miniflux_python_env/bin/activate
pip install pyyaml
```

I’ve collected the code for the docker containers and config together into this repo.

## Backup everything to google drive

Use rclone

```
print("Hello, world!")
```

There’s even a build of python (with the magic of WASM) that includes numpy and pandas!

```
import numpy as np
import pandas as pd
np.arange(12).reshape(3,-1)
```

The cells (of a single language) all use the same interpreter so you can share variables across. However this doesn’t seem to work when the page first loads.

```
import numpy as np
import pandas as pd
a = np.arange(12).reshape(3,-1)
df = pd.DataFrame({"zero" : a[0], "one" : a[1], "two" : a[2]})
df
```

Hopefully in future this could also hook into the nice html output that many libraries like pandas can produce!

`import json` away. However if the language is a bit more niche, there may not be a good parser for it available, or that parser might be missing features. Recently I came across a tiny language at work that looks like this:

```
[foo, bar, bazz
[more, names, of, things
[even, more]]]
[another, one, [here, too]]
```

I won’t get into what this is, but it was an interesting excuse to muck about with writing a grammar for a parser, something I had never tried before. So I went looking for a library and, after a false start, settled on pe. Don’t ask me what the gold standard in this space is, but I like pe.

To avoid getting too verbose, let’s just see some examples. Let’s start with an easy version of this problem: “[a, b, c]”.

```
import pe

parser = pe.compile(
    r'''
    List    <- "[" String ("," Spacing String)* "]"
    String  <- ~[a-zA-Z]+
    Spacing <- [\t\n\f\r ]*
    ''',
)
parser.match("[a, b, c]").groups()
>>> ('a', 'b', 'c')
```

So what’s going on here? Many characters mean the same as they do in regular expressions, so `[a-zA-Z]+` is one or more upper or lowercase letters while `[\t\n\f\r ]*` matches zero or more whitespace characters. The tilde `~` tells pe that we’re interested in keeping the string, while we don’t really care about the spacing characters. The pattern `String ("," Spacing String)*` seems to be the classic way to express a list-like structure of arbitrary length.
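
Since the flat case really is regular-expression territory, here’s a plain-`re` equivalent of that first grammar (my own throwaway code, just for comparison); the point is that this approach stops working the moment lists can nest, because a regex can’t balance brackets:

```python
import re

def parse_flat_list(text):
    # The flat "[a, b, c]" case only: no nesting allowed.
    match = re.fullmatch(r"\[\s*([a-zA-Z]+(?:\s*,\s*[a-zA-Z]+)*)\s*\]", text)
    if match is None:
        raise ValueError(f"not a flat list: {text!r}")
    return re.split(r"\s*,\s*", match.group(1))

print(parse_flat_list("[ a, b , c ]"))  # → ['a', 'b', 'c']
```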

Whitespace turns out to be annoying: “[ a, b, c]” does not parse with this grammar, so we’d have to change it to something like this:

```
import pe

parser = pe.compile(
    r'''
    List    <- "[" Spacing String (Comma String)* Spacing "]"
    Comma   <- Spacing "," Spacing
    String  <- ~[a-zA-Z]+
    Spacing <- [\t\n\f\r ]*
    ''',
)
parser.match("[ a, b , c ]").groups()
```

NB: there is a branch of pe, which hopefully will be merged soon, that includes the ability to auto-ignore whitespace.

We can now allow nested lists by changing the grammar slightly; we also add a hint to pe for what kind of python object to make from each rule:

```
import pe
from pe.actions import Pack

parser = pe.compile(
    r'''
    List    <- "[" Spacing Value (Comma Value)* Spacing "]"
    Value   <- List / String
    Comma   <- Spacing "," Spacing
    String  <- ~[a-zA-Z]+
    Spacing <- [\t\n\f\r ]*
    ''',
    actions={
        'List': Pack(list),
    },
)
parser.match("[ a, b , c, [d, e, f]]").value()
>>> ['a', 'b', 'c', ['d', 'e', 'f']]
```
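
Not part of the pe code above, but to show what a PEG parser is roughly doing under the hood for this same nested grammar, here’s a hand-rolled recursive-descent sketch (function names are mine):

```python
def skip_spacing(text, pos):
    # Spacing <- [\t\n\f\r ]*
    while pos < len(text) and text[pos] in " \t\n\f\r":
        pos += 1
    return pos

def parse_string(text, pos):
    # String <- [a-zA-Z]+
    start = pos
    while pos < len(text) and text[pos].isalpha():
        pos += 1
    if pos == start:
        raise ValueError(f"expected a name at position {start}")
    return text[start:pos], pos

def parse_value(text, pos):
    # Value <- List / String: try the list branch first, fall back to a name
    if text[pos] == "[":
        return parse_list(text, pos)
    return parse_string(text, pos)

def parse_list(text, pos=0):
    # List <- "[" Spacing Value (Comma Value)* Spacing "]"
    assert text[pos] == "["
    pos = skip_spacing(text, pos + 1)
    items = []
    value, pos = parse_value(text, pos)
    items.append(value)
    pos = skip_spacing(text, pos)
    while pos < len(text) and text[pos] == ",":  # (Comma Value)*
        pos = skip_spacing(text, pos + 1)
        value, pos = parse_value(text, pos)
        items.append(value)
        pos = skip_spacing(text, pos)
    assert text[pos] == "]"
    return items, pos + 1

print(parse_list("[ a, b , c, [d, e, f]]")[0])  # → ['a', 'b', 'c', ['d', 'e', 'f']]
```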

I’ll wrap up here because this post already feels long, but one thing I really like about pe is that you can easily push parts of what you’re parsing into named arguments to python functions. In the below I have set it up so that any time a “Name” rule gets parsed, the parser will call `Name(name = "foo", value = "bar")`, and this even works well with optional values too.

```
import pe
from pe.actions import Pack
from dataclasses import dataclass

@dataclass
class N:
    name: str
    value: str | None = None

parser = pe.compile(
    r'''
    List    <- "[" Spacing Value (Comma Value)* Spacing "]"
    Value   <- List / Name
    Name    <- name:String Spacing ("=" Spacing value:String)?
    Comma   <- Spacing "," Spacing
    String  <- ~[a-zA-Z]+
    Spacing <- [\t\n\f\r ]*
    ''',
    actions={
        'List': Pack(list),
        'Name': N,
    },
)
parser.match("[ a=b, b=g, c, [d, e, f]]").value()
>>> [N(name='a', value='b'),
     N(name='b', value='g'),
     N(name='c', value=None),
     [N(name='d', value=None), N(name='e', value=None), N(name='f', value=None)]]
```

`print(s, end="\r")`

I settled on using `IPython.display` with a handle. The problem with the print approach is that it doesn’t work when the output is shorter than the previous line.
```
import time
import random
from tqdm.auto import tqdm
from IPython.display import display, Markdown

info_line = display(Markdown(''), display_id=True)
for x in tqdm(range(0, 5), position=0):
    for y in tqdm(range(0, 5), position=1, leave=False):
        n = random.randint(1, 10)  # renamed from x to avoid shadowing the loop variable
        b = "Loading" + "." * n
        info_line.update(Markdown(b))
        time.sleep(0.5)
```

I’ve had a longstanding ambition to get a PCB manufactured but I’ve always put it off. Lately I had a need for a little adapter board to break out these 1.27mm spaced pins to 2.54mm pins that would fit into a breadboard. Feeling like it was a simple enough board I finally decided to fire up KiCad and give it a go.

The 1.27mm headers in question are on the back of this cute round lcd breakout board.

So I fired up KiCAD and got to work. I had used it a couple of times before but had never gotten as far as turning a design into a real PCB. Well, that changes today!

I used this excellent KiCAD plugin to generate the necessary gerber files that I could upload directly to JLCPCB. *Other fast cheap PCB manufacturers exist, as you will know if you’ve ever watched an electronics themed youtube video. PCB manufacturers are to electronics YouTubers as mattress peddlers are to podcasts.*

After getting the boards I soldered one up. Soldering the 1.27mm header was surprisingly difficult to do without causing bridges. And those bridges were tough to remove once there. It didn’t help that I had run out of desoldering braid. Anyway I eventually got all the pins connected without overheating and delaminating the board.

Next I realised that I had made the obvious error: I put the 1.27mm and 2.54mm headers on the wrong sides from where they should go. The board isn’t reversible so that means the pin assignments are all wrong. By some miracle, the ground pins do have mirror symmetry so at least the ground plane is still the ground plane.

I had thought about trying to squeeze the pin assignments onto the silkscreen; thankfully I didn’t, because this mistake would have made them completely wrong.

I cloned the KiCAD file and swapped all the pin assignments around, giving me this handy little cheat sheet.

Mamba is hugely faster than conda. Use micromamba installed with brew.

Put this in the `~/.condarc` (which mamba obeys too):

```
channel_priority: strict
channels:
- conda-forge
```

Note I’ve added `-y` to these commands to skip the confirmation dialog.

Create env on command line: `mamba create -y -c conda-forge -n envname python=3.11 other_package ...`

Create env from file: `mamba env create -y -f file.yaml`

Remove env by name: `mamba env remove -y -n envname`

Export only manually installed packages to file: `mamba env export --from-history`

Create a `jupyter_env.yaml` file (so that you can tear it down and rebuild it when everything explodes). Install that.

```
name: jupyter
channels:
- conda-forge
dependencies:
- python=3.11
- jupyterlab
- nb_conda_kernels # This makes conda envs visible to jupyterlab
- jupyterlab_widgets # Makes ipywidgets work in jupyterlab
```

Notes:
[making mamba kernels visible](https://github.com/Anaconda-Platform/nb_conda_kernels)
[making ipywidgets work](https://ipywidgets.readthedocs.io/en/latest/user_install.html#installing-the-jupyterlab-extension)
Can get an env yaml with `conda env export --from-history`

To make other environments visible to the jupyter lab instance and make ipywidgets work (e.g. for tqdm progress bars) you need two extra packages:

```
name: child
channels:
- conda-forge
dependencies:
- python=3.11
- ipywidgets # The child to jupyterlab_widgets
- ipykernel # The child to nb_conda_kernels
```

In our new flat we have this mezzanine bed with a yellow ladder leading up to it. Between the ladder and the wardrobe we had this kind of triangular space that we wanted to use for more storage. After doing a quick design on paper I started mocking something up.

I got quite far with this version before I realised I had made a terrible error: somewhere on my scratch pad of calculations I had written something like “1700 - 36 = 1404”! After taking a few days to mourn the lost effort I decided to make a better plan to avoid similar mistakes. I used the CAD model to generate a set of cutting plans that I could print out and take to the workshop.

In this new version I opted to make the side panels out of solid sheets of 18mm pine plywood. It made the final object heavier but they were much easier to cut on the table saw. The extra weight was probably a good thing; in place it feels pleasantly heavy and sturdy.

We see physical objects when photons from sources of light travel along some path, possibly bouncing and scattering along the way before entering our eyes and forming an image on our retinas.

Raytracing is a method for rendering images that does this in reverse: for each pixel in the image we shoot a light ray out of a point at a different angle. By calculating what that ray intersects with we can decide how to colour that pixel.

In non-relativistic settings these rays are just straight lines, maybe bouncing off surfaces sometimes, but in GR we need to calculate the full geodesic of the ray. By doing this we will be able to produce simple schematic images of the distortions created by a black hole.

In this problem set we're going to be working towards raycasting these schematic images of a Schwarzschild black hole.

The code will become a little involved as we work towards the payoff but I've tried to break it down into manageable chunks.

If you get stuck, I've provided some hints with the idea that you should check them one at a time, giving yourself some time between each to try to arrive at the solution. It's likely that everyone will get a bit stuck at some point along the way, so it's ok to sometimes take a peek at the answer to the part you're stuck on.

Once you're done with a question, check the answer before moving onto the next part.

The Schwarzschild metric is \(ds^2 = (1 - \tfrac{r_s}{r})\, dt^2 - (1 - \tfrac{r_s}{r})^{-1} dr^2 - r^2 d\theta^2 - r^2 \sin^2\theta\, d\phi^2\)

Where $r,\phi,\theta$ are our spherical polar coordinates and $r_s$ is the radius of the event horizon, which we will mostly set to 1 in what follows. However it's useful to leave it in the numerics because $r_s = 0$ corresponds to flat spacetime. This is also a useful way to debug your code: when $r_s = 0$ all your geodesics should be straight and you can do normal geometry on them.

Prove that for a photon travelling through the Schwarzschild metric in the $\theta = \pi / 2$ plane, we can eliminate $t$ in the geodesic equation to arrive at a differential equation for $r(\phi)$:

\(\tfrac{dr}{d\phi}^2 = a r^4 - (1 - r_s/r)(b r^4 + r^2)\) for some constants $a$ and $b$.

The trick to do this without all the faff of calculating the Christoffel symbols is to use the Euler-Lagrange equations with the Lagrangian $L = g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu$

\[\frac{\partial L}{\partial x^\mu} = \frac{d}{ds} \left(\frac{\partial L}{\partial \dot{x}^\mu}\right)\] where $\tfrac{d}{ds}$ and dots are both derivatives w.r.t some parameter $s$.

Notice that the metric doesn't depend on $\phi$ or $t$, so the EL equations for those coordinates give us two conserved quantities which we'll label $e$ and $l$ because they turn out to be energy and angular momentum. \(\dot{t}(1 - r_s/r) = e\) \(\dot{\phi}r^2 = l\)

Plugging these into the EL equation for $r$, with the goal of getting rid of all occurrences of $t$, we eventually arrive at: \(\left(\frac{dr}{d\phi}\right)^2 = \frac{e^2 r^4}{l^2} - (1 - \frac{r_s}{r})(\frac{r^4}{l^2} + r^2)\)

Check for yourself that you can transform this with $u = 1/r$ to get

\[\left(\frac{du}{d\phi}\right)^2 = \frac{e^2}{l^2} - (1 - r_s u)\left(\frac{1}{l^2} + u^2\right)\]

The massless limit here turns out to be $l \rightarrow \infty$ with $e/l$ held fixed (so the $\tfrac{1}{l^2}$ term drops) and we get: \(\left(\frac{du}{d\phi}\right)^2 = \frac{e^2}{l^2} - (1 - r_s u)\,u^2\)

Now there's one final detail before we start coding: this equation contains a square of a derivative, which is a little annoying to work with numerically; solving it directly would require writing some quite low level numerical routines. Instead what we'll do is convert it to a second order differential equation by taking another derivative w.r.t $\phi$. This is very similar to the relationship between $f = m\ddot{x}$ and $\tfrac{1}{2} m \dot{x}^2 + V(x) = E$. Differentiating both sides of the massless equation w.r.t $\phi$ gives $2\dot{u}\ddot{u} = -\dot{u}(2u - 3 r_s u^2)$, and dividing through by $2\dot{u}$ gives the equation we will actually be treating numerically:

\[\ddot{u} = -u (1 - \tfrac{3}{2} r_s u)\]

i) Recall that a 2nd order differential equation contains derivatives up to $\ddot{u}$ while a first order diff. eqn contains only terms like $\dot{u}$. Show that $\ddot{u} = -u (1 - \tfrac{3}{2} r_s u)$ can be written as two coupled first order equations by introducing a second variable.

ii) Read the documentation for `scipy.integrate.solve_ivp`. Can you figure out how part i) helps us to use `solve_ivp` on our problem?

i) Introduce a new variable $v = \dot{u}$ so that $\dot{v} = \ddot{u}$. This seems a little too obvious, but now we have two first order equations! \(v = \dot{u}\) \(\dot{v} = -u (1 - \tfrac{3}{2} r_s u)\)

ii) From the docs for `solve_ivp` we see that it can integrate equations of the form $\tfrac{dy}{dt} = f(t, y)$ with initial conditions $y(t_0) = y_0$, but crucially $y$ can be of any dimension, so the trick is to write our coupled equations in a vector form. If we define: \(\vec{y} = (y_0, y_1) = (\dot{u}, u)\) then we can write: \(\dot{\vec{y}} = (\ddot{u}, \dot{u}) = (-y_1(1 - \tfrac{3}{2}\; r_s y_1), y_0)\) which is something we can integrate with `solve_ivp`.

i) First write a function `geodesic(u, udot, phi_max)` that takes initial conditions $(u_0, \dot{u}_0)$ and returns a trajectory $u(\phi)$, represented by a numpy array of $\phi$ values and one of $u$ values, with $\phi$ between 0 and `phi_max`.

ii) Now write a function `phi_u_to_xy(phi, u)` that transforms from $(u, \phi)$ coordinates to $(x, y)$ coordinates.

iii) Plot a trajectory in $x,y$ space starting from $u = 2/3, \dot{u} = 2/9$ with `phi_max = 7`. If all goes well you'll get a somewhat polygonal looking trajectory starting at $(x,y) = (1.5, 0)$ and ending at $(0,0)$.

iv) Reduce the maximum step size to something more reasonable like 0.2 to get a smoother plot.

- Use `scipy.integrate.solve_ivp`
- Read the docs for `scipy.integrate.solve_ivp`
- Note that `solve_ivp` solves the equation $dy/dt = f(t, y)$ where $y$ may be a vector.

```
import numpy as np
from scipy.integrate import solve_ivp
from matplotlib import pyplot as plt
from math import pi

def geodesic(u, udot, phi_max, r_s = 1, max_step = 0.2):
    """
    Integrates f(phi, y = (udot(phi), u(phi))) given initial data u and udot, for phi in (0, phi_max).
    The stepsize is variable but will not be larger than max_step so this can be used to get smoother plots of the trajectory.
    Returns phi, u such that (u[i], phi[i]) is the ith point along the trajectory.
    The number of points returned depends on the initial conditions, step size and stopping criteria.
    """
    def f(phi, y): return np.array([-y[1]*(1 - 3/2 * r_s * y[1]), y[0]])
    o = solve_ivp(
        fun = f,
        t_span = (0, phi_max),
        y0 = np.array([udot, u]),
        max_step = max_step,
    )
    return o.t, o.y[1]

def phi_u_to_xy(phi, u):
    r = 1/u
    return r*np.cos(phi), r*np.sin(phi)

phi, u = geodesic(u = 2/3, udot = 2/9, phi_max = 7)
x, y = phi_u_to_xy(phi, u)
f, ax = plt.subplots(figsize = (10,10))
ax.plot(x, y);
s = 1.5
ax.set(xlim = (-s,s), ylim = (-s,s));
```
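
One nice consequence of keeping `r_s` as a parameter: with `r_s = 0` the equation reduces to $\ddot{u} = -u$, whose solutions $u = A\cos\phi + B\sin\phi$ are exactly straight lines in polar form. A quick sketch of that debugging check (my own addition, using the same `solve_ivp` setup):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The r_s = 0 case of the equation above: u'' = -u, so
# 1/r = A cos(phi) + B sin(phi), which is a straight line in polar coordinates.
def rhs(phi, y):
    udot, u = y
    return [-u, udot]

solution = solve_ivp(rhs, t_span=(0, 1.0), y0=[0.0, 1.0], max_step=0.01)
# Initial conditions u(0) = 1, u'(0) = 0 give A = 1, B = 0, i.e. u = cos(phi)
assert np.allclose(solution.y[1], np.cos(solution.t), atol=1e-4)
```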

i) Note that the point $u = 0$ corresponds to $r = \infty$, so it makes sense to stop the simulation at $u = 0$ because physically it means the photon has completely escaped the black hole and has shot off into space. Search the docs for `solve_ivp` for a way to implement this early stopping. If you don't do this you will notice spurious solutions later where $u$ has gone past 0 to small negative values. You'll notice that you actually get some spurious solutions anyway when $u$ gets very small, so instead of stopping the simulation at $u = 0$ stop it at some large value like $r = 100$ (don't forget $r = 1/u$).

ii) Write a function `r_rdot_to_u_udot(r, rdot)` that converts from $(r, \dot{r})$ to $(u, \dot{u})$

iii) Use the above to plot geodesics with the initial values $r = 3/2$ and $\dot{r}$ ranging between $-1$ and $0.1$

iv) Do another for $r = 3$ and $\dot{r}$ ranging between $-3$ and $-1$ but feel free to play around with the values.

Once you get the plots from iii) you'll see that the photons either fall into the singularity or escape to infinity depending on their initial conditions; the two regimes are separated by an unstable circular orbit called the photon sphere that lies at $r = \tfrac{3}{2}r_s$

Figuring out how to implement stop conditions in `solve_ivp` is a little odd but at least there's some example code in the documentation. Part ii) requires differentiating $r = 1/u$ to get the relationship between $\dot{r}$ and $\dot{u}$
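
Spelling that differentiation out:

\[r = \frac{1}{u} \quad\Rightarrow\quad \dot{r} = -\frac{\dot{u}}{u^2} \quad\Rightarrow\quad \dot{u} = -\dot{r}\,u^2 = -\frac{\dot{r}}{r^2}\]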

```
# Tell the solver to stop when u = 0.
# The solver triggers an event when stop_condition(t, y) == 0,
# and because stop_condition.terminal == True it takes this as a signal to stop the simulation.
def stop_condition(t, y): return y[1] - 1/100
stop_condition.terminal = True

def geodesic(u, udot, phi_max, r_s = 1, max_step = 0.2):
    """
    Integrates f(phi, y = (udot(phi), u(phi))) given initial data u and udot, for phi in (0, phi_max).
    The stepsize is variable but will not be larger than max_step so this can be used to get smoother plots of the trajectory.
    Returns phi, u such that (u[i], phi[i]) is the ith point along the trajectory.
    The number of points returned depends on the initial conditions, step size and stopping criteria.
    Stops if r grows larger than 100 * r_s, i.e. the ray is going to infinity.
    """
    def f(phi, y): return np.array([-y[1]*(1 - 3/2 * r_s * y[1]), y[0]])
    o = solve_ivp(
        fun = f,
        t_span = (0, phi_max),
        y0 = np.array([udot, u]),
        max_step = max_step,
        events = stop_condition,
    )
    return o.t, o.y[1]

def phi_u_to_xy(phi, u):
    r = 1/u
    return r*np.cos(phi), r*np.sin(phi)

def r_rdot_to_u_udot(r, rdot):
    udot = - rdot / r**2
    u = 1/r
    return u, udot

r = 1.5  # The radius to shoot the rays from
s = 3    # The radius to include in the plot
fig, ax = plt.subplots(figsize = (10,10))
ax.set(xlim = (-s,s), ylim = (-s,s))
for rdot in np.linspace(-1, 0.1, 20):
    u, udot = r_rdot_to_u_udot(r, rdot)
    phi, u = geodesic(u, udot, phi_max = 10)
    x, y = phi_u_to_xy(phi, u)
    ax.plot(x, y, color = 'orange')
    #ax.scatter(x,y)

# Show important regions
phi = np.linspace(0, 2*np.pi, 100)
ax.plot(1*np.cos(phi), 1*np.sin(phi), linestyle = '-', color = 'black', label = "Schwarzschild Radius")
ax.plot(1.5*np.cos(phi), 1.5*np.sin(phi), linestyle = '--', color = 'black', label = "Photon Sphere")
ax.legend();
```

```
r = 3  # The radius to shoot the rays from
s = 5  # The radius to include in the plot
fig, ax = plt.subplots(figsize = (10,10))
ax.set(xlim = (-s,s), ylim = (-s,s))
for rdot in np.linspace(-3, -1, 20):
    u, udot = r_rdot_to_u_udot(r, rdot)
    phi, u = geodesic(u, udot, phi_max = 10)
    x, y = phi_u_to_xy(phi, u)
    ax.plot(x, y, color = 'orange')
    #ax.scatter(x,y)

# Show important regions
phi = np.linspace(0, 2*np.pi, 100)
ax.plot(1*np.cos(phi), 1*np.sin(phi), linestyle = '-', color = 'black', label = "Schwarzschild Radius")
ax.plot(1.5*np.cos(phi), 1.5*np.sin(phi), linestyle = '--', color = 'black', label = "Photon Sphere")
ax.legend();
```


i) We started off with $(\dot{u}, u)$ initial conditions and then moved to using $(\dot{r}, r)$. Now define $(\alpha, r)$ where $\alpha$ is the angle between the ray's tangent and the horizontal.

ii) What we're working towards is to determine whether each ray goes to infinity or hits the event horizon, and where. We're going to do this by taking advantage of `solve_ivp`'s ability to tell us about events; we've already used it to stop the simulation at $u = 0$. So let's define another event that fires if the ray hits the event horizon at $u = 1$. We'll then be able to classify rays into those that escaped and those that were captured by the black hole.

NB: if you pass `events = [f, g]` into `solve_ivp` then the solution will have a field `t_events` where `t_events[0]` contains the $t$ values at which $f(t,y) = 0$ and `t_events[1]` those at which $g(t,y) = 0$

Plot rays for $r = 5$ and $\alpha$ between 0.01 and $\pi$; notice that the rays are evenly spread out now when they emanate from the observer. Use the events to colour the rays red or blue depending on whether the ray escaped to infinity or hit the horizon.

You'll see that the lines actually overshoot the event horizon and go a little inside. Don't worry about this, it's just because of the finite step size. The events actually have more accurate positions than the steps themselves because `solve_ivp` uses numerical root finding under the hood to estimate where $f(t,y) = 0$ even if it happens between steps.

i)

ii)

```
def escape(t, y): return y[1] - 1/100
escape.terminal = True

def horizon(t, y): return y[1] - 1
horizon.terminal = True

def geodesic(u, udot, phi_max, r_s = 1, max_step = 0.1):
    """
    Integrates f(phi, y = (udot(phi), u(phi))) given initial data u and udot, for phi in (0, phi_max).
    The stepsize is variable but will not be larger than max_step so this can be used to get smoother plots of the trajectory.
    Returns phi, u, o such that (u[i], phi[i]) is the ith point along the trajectory.
    o is the full result object returned by solve_ivp which contains, among other things, information about events that occurred on the trajectory.
    The number of points returned depends on the initial conditions, step size and stopping criteria.
    Stops if r grows larger than 100 * r_s, i.e. the ray is going to infinity,
    or if r < 1, in which case the ray has crossed the event horizon.
    """
    def f(phi, y): return np.array([-y[1]*(1 - 3/2 * r_s * y[1]), y[0]])
    o = solve_ivp(
        fun = f,
        t_span = (0, phi_max),
        y0 = np.array([udot, u]),
        max_step = max_step,
        events = [escape, horizon],
    )
    return o.t, o.y[1], o

fig, axes = plt.subplots(ncols = 2, figsize = (14,7))
r = 5
s = 5
for r_s, ax in zip([0, 1], axes):
    ax.set(xlim = (-s,s), ylim = (-s,s))
    for alpha in np.linspace(0.01, pi/2, 50):
        rdot = -r/np.tan(alpha)
        u, udot = r_rdot_to_u_udot(r, rdot)
        phi, u, o = geodesic(u, udot, phi_max = 10, r_s = r_s)
        x, y = phi_u_to_xy(phi, u)
        color = "blue" if len(o.t_events[0]) > 0 else "red"
        ax.plot(x, y, color = color)
    phi = np.linspace(0, 2*np.pi, 100)
    ax.plot(1*np.cos(phi), 1*np.sin(phi), linestyle = '-', color = 'black', label = "Schwarzschild Radius")
    ax.legend()
axes[0].set(title = "Flat Spacetime")
axes[1].set(title = "Schwarzschild Spacetime");
```

At this point we should stop and think about what this is. We're calculating geodesics emanating from an observer at some point outside the horizon. If we want to interpret what this means about how a black hole looks we have to be careful:

1) When doing raytracing we're assuming the geodesics are reversible, that is: light could follow a path from the surface of the horizon to the observer. You'll have to take my word that this is true in this case, though you can easily see that it isn't true for points inside the event horizon.
2) The other thing to note is that when matter falls into a black hole, light that it emits will appear more and more redshifted until it's essentially invisible. We're not going to account for that here.

That being said, the above plots tell us two things about what black holes look like:

1) The horizon appears larger than it actually is.
2) We are able to see light that is emitted from the back, side and actually all the way around the hole. If you add more rays you'll see there's no limit to how many loops the light rays can make, it just requires more and more tuning of the angle. This means that at the edge of our image of the hole we're going to see a lot of copies of the hole all smushed together.

Now we're going to move out of this 2D plane into 3D. Let's define $\alpha$ as before and also introduce $\beta$, which will measure rotation about the line between the observer and the origin, and $\gamma$, which will be the value of $\phi$ where the ray intersects the event horizon.

We'll cheat a bit: because of the high symmetry of the (non-rotating) black hole and the fact we're looking at it axially, all we really need to know is $\gamma(\alpha)$; $\beta$ just rotates everything. The rest is just coordinate transforms, incredibly tedious ones at that.

Use your geodesic code to make a lookup table for $\gamma(\alpha)$ with $\alpha$ between 0.01 and $\pi$ for both Schwarzschild ($r_s = 1$) and flat ($r_s = 0$) spacetime. In principle we could also collect information about the escaped rays but I'll leave that as an exercise for the reader.

```
def compute_interpolation(interp_alphas, r_obs, r_s):
    interp_gamma = np.full(shape = len(interp_alphas), fill_value = np.NaN)
    for i, alpha in enumerate(interp_alphas):
        rdot = -r_obs/np.tan(alpha)
        u, udot = r_rdot_to_u_udot(r_obs, rdot)
        phi, u, o = geodesic(u, udot, phi_max = 10, max_step = np.inf, r_s = r_s)
        # if the ray doesn't hit the horizon, stop.
        if len(o.t_events[1]) == 0: break
        # otherwise save the gamma where it hit the horizon.
        interp_gamma[i] = o.t_events[1][0]
    # we stopped computing at i, so cut everything else off
    return interp_alphas[:i], interp_gamma[:i]

fig, axes = plt.subplots(ncols = 2, figsize = (16,8))
for i, r_obs in enumerate([70, 1.5]):
    interp_alphas = np.linspace(0.001, pi/2, 1000) # the range of alpha that we will use
    schwarzchild_alphas, schwarzchild_gammas = compute_interpolation(interp_alphas, r_obs, r_s = 1)
    flat_alphas, flat_gammas = compute_interpolation(interp_alphas, r_obs, r_s = 0)
    ax = axes[i]
    ax.plot(schwarzchild_alphas, schwarzchild_gammas, label = "Schwarzschild")
    ax.plot(flat_alphas, flat_gammas, label = "Flat Spacetime")
    ax.axvline(x = flat_alphas[-1], linestyle = '--', label = "Max alpha in flat space")
    ax.axhline(y = flat_gammas[-1], linestyle = '-.', label = "Max gamma in flat space")
    ax.legend()
    ax.set(ylabel = "gamma", xlabel = "alpha")
axes[1].set(xlim = (0, 1), ylim = (0, 7), title = "Very close to the horizon, r_obs = 1.5")
axes[0].set(xlim = (0, 0.07), ylim = (0, 7), title = "Far from the horizon, r_obs = 70");
```

From the above we see that the functions are similar when the rays are travelling almost directly towards the centre of the hole, but the black hole is visible for a much larger range of $\alpha$ than it would be in flat spacetime (i.e. if it were just a sphere rather than a gravitationally massive body).

Now we'll put together the function we computed in question 6 to render an image of the surface of a black hole as viewed at some distance. To give us some reference we're going to texture the surface of the event horizon with an image of the earth's surface. This is terribly unphysical since the earth is not a black hole, it's not emissive, etc etc, but it's more interesting than just plotting lines of constant $\phi$ and $\theta$.

We'll take the image below as our convention for how to put coordinates on a sphere though I'll be using radians instead of degrees in the code.

This code is not that interesting to write so I'll just give it to you. We load in an image (which you need to download from here) and then use an interpolator to get a function that maps lat, lon points to colors: `earth([(lat0,lon0), (lat1,lon1)...]) -> [(r0, g0, b0), (r1, g1, b1)...]`

```
from matplotlib import image
import scipy.interpolate

# get this from https://upload.wikimedia.org/wikipedia/commons/thumb/2/23/Blue_Marble_2002.png/2560px-Blue_Marble_2002.png
im = image.imread("./Blue_Marble_2002.png")
print(im.shape)

lat = np.linspace(-pi/2, pi/2, im.shape[0])
lon = np.linspace(-pi, pi, im.shape[1])
latv, lonv = np.meshgrid(lat, lon, sparse=False, indexing='ij')
earth_interp = scipy.interpolate.RegularGridInterpolator([lat, lon], im, bounds_error=False)

def to_points(xv, yv): return np.array([xv, yv]).transpose((1,2,0))
def wrap_lon(phase): return (phase + pi) % (2 * pi) - pi # wraps to the interval (-pi, pi)
def wrap_lat(phase): return (phase + pi/2) % (pi) - pi/2 # wraps to the interval (-pi/2, pi/2)

def earth(lat, lon):
    points = to_points(wrap_lat(lat), wrap_lon(lon))
    return np.clip(earth_interp(points), 0.0, 1.0)

fig, axes = plt.subplots(figsize = (20,20))
plt.imshow(earth(latv, lonv))
```

```
(1280, 2560, 3)
<matplotlib.image.AxesImage at 0x7f94be82ad10>
```
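As a quick sanity check of the wrapping helpers (restated here so the snippet stands alone), an angle past the antimeridian comes back on the other side, while angles already in range are unchanged:

```python
import numpy as np
pi = np.pi

def wrap_lon(phase): return (phase + pi) % (2 * pi) - pi  # wraps to (-pi, pi)
def wrap_lat(phase): return (phase + pi/2) % pi - pi/2    # wraps to (-pi/2, pi/2)

print(wrap_lon(3 * pi / 2))  # -pi/2: 270 degrees east is 90 degrees west
print(wrap_lat(0.2))         # 0.2: already in range, unchanged
```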

I'm going to walk you through this a little bit because the process of rendering this image is a little fiddly and took me a while to get right.

First we want to define the bounds of our image. For want of better names I'll call the coordinates x and y, but really they measure the angles of rays hitting our observer's eye. We'll choose $2\pi/3$ as the person's maximum field of view.

```
fov = 2*pi/3 #field of view
#define coordinates for our image window
x = np.linspace(-fov/2, fov/2, 500)
y = np.linspace(-fov/2, fov/2, 500)
```

Next we define a 'mesh': this means that while x and y are just arrays [x0, x1, ...], xv and yv have two dimensions such that `(xv[i,j], yv[i,j])` is the coordinate of the (i,j) grid point. This allows us to do fast numpy things like `r = np.sqrt(xv**2 + yv**2)`, which gives us a variable defined over the image plane equal to the distance from the origin.

```
xv, yv = np.meshgrid(x, y, sparse=False, indexing='ij')
```
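To make the mesh concrete, here's a tiny standalone example (illustrative values, not the image coordinates above):

```python
import numpy as np

x = np.array([0.0, 1.0])
y = np.array([10.0, 20.0, 30.0])
xv, yv = np.meshgrid(x, y, sparse=False, indexing='ij')

# with indexing='ij': xv[i, j] = x[i] and yv[i, j] = y[j],
# so together they give the coordinate of the (i, j) grid point
print(xv.shape)            # (2, 3)
print(xv[1, 2], yv[1, 2])  # 1.0 30.0
```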

Next we define r and $\phi$ in the image plane. If you look at the diagram from question 6 you should be able to convince yourself that it makes sense to define $\alpha = r$ and $\beta = \phi$.

```
r = np.sqrt(xv**2 + yv**2) #ranges from 0 to pi/3
phi = np.arctan2(xv, yv) #ranges from -pi to pi
#we map alpha as defined above onto r and beta onto phi
alpha = r
beta = phi
```

Now we actually map alpha onto gamma. Here `spacetime` is a function you need to write that maps alpha onto gamma according to the spacetime you're in; you want to interpolate the data you got from question 6. The `pi/2 - spacetime(alpha)` is because we're measuring our azimuthal angles from the equator rather than from the poles.

```
def spacetime(x): return x #a dummy function that you need to fill in
gamma = pi/2 - spacetime(alpha)
beta = beta
```
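A minimal sketch of what `spacetime` might look like, assuming you kept your question 6 lookup table in arrays called `alphas` and `gammas` (hypothetical names, filled here with placeholder data rather than the real computed values):

```python
import numpy as np

# Hypothetical lookup table from question 6; substitute your computed values.
alphas = np.linspace(0.001, np.pi / 2, 1000)
gammas = 2.0 * alphas  # placeholder data, NOT real geodesic results

def spacetime(alpha):
    # right=np.nan marks alphas beyond the table, i.e. rays that miss the horizon
    return np.interp(alpha, alphas, gammas, right=np.nan)

print(spacetime(0.5))  # 1.0 with the placeholder data
```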

And finally, just so that we can view the earth from the side rather than only at one of the poles, we'll map to 3D cartesian coordinates, rotate the axes and then map back. Now $(\phi, \theta)$ are the longitude and latitude on the earth so we can map straight to an image pixel and we're done!

```
def polar_to_cart(r, phi, theta):
    """
    Convert a spherical polar coordinate system
    (r, phi, theta) with 0 < r, -pi < phi < pi, -pi/2 < theta < pi/2
    into Cartesian x, y, z coordinates.
    NB:
    Phi measures rotations about the z axis.
    Theta is the angle above or below the xy plane.
    This is slightly different from the typical definition where theta is the angle with the z axis.
    """
    return r * np.cos(phi) * np.cos(theta), r * np.sin(phi) * np.cos(theta), r * np.sin(theta)

def cart_to_polar(x, y, z):
    """
    Reverse the above transformation.
    Uses the convention that phi ranges from -pi to pi
    and theta ranges from -pi/2 to pi/2.
    """
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arcsin(z / r)
    phi = np.arctan2(y, x)
    return r, phi, theta

x, y, z = polar_to_cart(r = 1, phi = beta, theta = gamma)
_, phi, theta = cart_to_polar(x, -z, y)
```
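A quick round-trip sanity check of these conventions (both helpers restated so the snippet runs standalone):

```python
import numpy as np

def polar_to_cart(r, phi, theta):
    # phi rotates about z; theta measures elevation above the xy plane
    return r*np.cos(phi)*np.cos(theta), r*np.sin(phi)*np.cos(theta), r*np.sin(theta)

def cart_to_polar(x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    return r, np.arctan2(y, x), np.arcsin(z / r)

# pick a point within the principal ranges and check we get it back
r0, phi0, theta0 = 1.0, 0.7, -0.3
x, y, z = polar_to_cart(r0, phi0, theta0)
r1, phi1, theta1 = cart_to_polar(x, y, z)
print(np.allclose([r1, phi1, theta1], [r0, phi0, theta0]))  # True
```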

Can you put all that together to make an image of the surface of the earth, if the surface of the earth also happened to be the event horizon of a black hole? (Terms and conditions apply, this can't actually happen, gravitational redshift is ignored etc etc.)

I wouldn't blame you for skipping to the solution on this one, it's a bit tricky but the payoff is nice.

```
r_obs = 3
interp_alphas = np.linspace(0.001, pi/2, 1000) # the range of alpha that we will use
schwarzchild_alphas, schwarzchild_gammas = compute_interpolation(interp_alphas, r_obs, r_s = 1)
flat_alphas, flat_gammas = compute_interpolation(interp_alphas, r_obs, r_s = 0)

# these two functions interpolate the gamma(alpha) functions we calculated earlier
# right = np.NaN tells them to return NaN if alpha is too large and the ray doesn't hit.
def schwarzchild_gamma(alpha):
    shape = alpha.shape
    return np.interp(alpha.flatten(), schwarzchild_alphas, schwarzchild_gammas, right = np.NaN).reshape(shape)

def flat_gamma(alpha):
    shape = alpha.shape
    return np.interp(alpha.flatten(), flat_alphas, flat_gammas, right = np.NaN).reshape(shape)

fig, rows = plt.subplots(nrows = 2, ncols = 3, figsize = (15,10))
for axes, name, spacetime in zip(rows, ["Flat", "Schwarzschild"], [flat_gamma, schwarzchild_gamma]):
    fov = 2*pi/3 # field of view
    # define coordinates for our image window
    x = np.linspace(-fov/2, fov/2, 500)
    y = np.linspace(-fov/2, fov/2, 500)
    # make them into grids
    xv, yv = np.meshgrid(x, y, sparse=False, indexing='ij')
    # define r and phi for our image window
    r = np.sqrt(xv**2 + yv**2) # ranges from 0 to pi/3
    phi = np.arctan2(xv, yv) # ranges from -pi to pi
    # we map alpha as defined above onto r and beta onto phi
    alpha = r
    beta = phi
    gamma = pi/2 - spacetime(alpha)
    x, y, z = polar_to_cart(r = 1, phi = beta, theta = gamma)
    _, phi, theta = cart_to_polar(x, -z, y)
    axes[0].imshow(earth(gamma, beta))
    axes[1].imshow(earth(-gamma, beta))
    axes[2].imshow(earth(theta, phi))
    for a in axes:
        a.axis('off')
        a.set(title = name)
```


In the end we’ll end up with this:

We could easily add buttons, extra inputs and outputs etc. If you’re working on a project where much of the work consists of configuring the hardware correctly then the effort of making a simulation like this is probably not worth it. For projects like the sensor watch, however, where the inputs and outputs are pretty much fixed while lots of people will want to modify the software it makes a lot of sense.

The sensor watch project has a firmware with a clearly defined interface, and the trick is that you swap out the implementation of this interface between the real hardware and the simulation. I wanted to get a better understanding of this so I thought I'd do a super simple version myself. Let's do that classic arduino project… blinky!

Let’s grab the code for blinky.ino, we could easily compile this for a real arduino using the IDE or using a makefile. I’m gonna skip the details of getting this working for both real hardware and for emscripten to keep it simple.^{1} The starting point is to try to compile a simple arduino sketch like this one:

```
#include <Arduino.h>

void setup() {
    Serial.println("Setting pinMode of pin 13");
    pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
    Serial.println("LED On");
    digitalWrite(LED_BUILTIN, HIGH);
    delay(1000);
    Serial.println("LED Off");
    digitalWrite(LED_BUILTIN, LOW);
    delay(1000);
}
```

In order to get this to compile using emscripten we need to do two things:

- write a wrapper script that runs `setup()` and then calls `loop()` in an infinite loop
- provide implementations of functions like `digitalWrite` and `delay` that interface with emscripten.

The first bit is pretty easy:

```
#include "blink.c"

int main() {
    setup();
    while(1) loop();
}
```

It’s not typical to include a .c file like this. To avoid having to recompile everything all the time, you’re really supposed to just include the .h file, compile .c files to .o object files and then link them all together at the end. For this small example it’s fine though.

Ok, now let’s write a quick implementation of `Arduino.h`

that just covers what we need for this example, I’ll include the code in the header file for brevity. First we have some includes. We need `stdint`

for the fixed width integer types like `uint8_t`

, stdio for `printf`

which emscripten provides its own implementation of and `<emscripten.h>`

to embed javascript implementations. We also have some simple definitions that appear in our sketch.

```
#include <stdint.h>
#include <stdio.h>
#include <emscripten.h>
#include <emscripten/html5.h>
#define HIGH 1
#define LOW 0
#define LED_BUILTIN 13
#define OUTPUT 1
#define INPUT 0
```

Next we do `digitalWrite`. We use the `EM_ASM` macro, which just pastes that JS code into the compiled wasm, substituting in the `value` for `$0`. I grabbed a creative commons licensed svg off wikipedia, added a little overlay in inkscape with the id `light_led`, and then we toggle its opacity.

```
void pinMode(uint8_t pin, uint8_t value) {}

void digitalWrite(uint8_t pin, uint8_t value) {
    if(pin == 13) {
        EM_ASM({
            document.getElementById("light_led").style.opacity = $0 ? 1 : 0;
        }, value);
    }
}
```

For `delay` it's a bit more complicated, because the JS that runs in browsers has to share the CPU; it can't delay by just wasting cycles the way we can on an arduino. So we use the asyncify feature of emscripten. This gives us a function `emscripten_sleep` that effectively yields to other code running on the page for a certain period of time, which allows us to implement `delay` in a non-blocking way.

```
void delay(uint32_t milliseconds) {
    emscripten_sleep(milliseconds);
}
```

Finally, `Serial.println` should be pretty easy: we just call `printf`. However we need to do something to mimic the `Serial.print` syntax, which involves a little C++:

```
class SerialClass {
public:
    void begin(uint32_t baud) {}
    void println(const char string[]) {
        printf("%s\n", string);
    }
};

SerialClass Serial;
```

And with that we're almost done! We have three files: `blink.c` that represents our arduino sketch, `main.c` that wraps it and `Arduino.h` that implements the lower level stuff. To compile it we need the emscripten C++ compiler `em++`:

```
em++ -sASYNCIFY -O3 main.c -o build/main.js -I./
```

`-sASYNCIFY` tells emscripten that it should use asyncify, `-O3` runs optimisations (as recommended when using asyncify) and `-I./` tells the compiler that `Arduino.h` can be found in the same directory as `main.c`. We get two files as output: `main.js` and `main.wasm`. The former is another wrapper script and `main.wasm` contains the actual compiled webassembly code.

So how do we use `main.js` and `main.wasm`? We need to include `main.js` and some extra glue code `loader.js` on our HTML page, along with our SVG and a textarea tag that the serial output will go to:

```
<figure>
  <svg>...</svg>
  Serial Console
  <textarea class="emscripten" id="output" rows="8"
            style="width: 80%; display: block; margin: 1em;"></textarea>
  <figcaption>
    The finished arduino simulation.
  </figcaption>
</figure>
<script async type="text/javascript" src="loader.js"></script>
<script async type="text/javascript" src="main.js"></script>
```

And that gives us the final result. I’ve put all the files in this repository.

There is a firmware called Movement that already supports most of the things you probably want a watch to do, uses very little power and exposes a nice interface for writing extensions.

To compile it you need the ARMmbed toolchain, and if you want to test the firmware in a javascript simulator (you do!) then you also need emscripten:

```
# first make sure you've activated emscripten
# in the current shell, see emscripten docs
# for me this means running
source ~/git/emsdk/emsdk_env.sh
cd ~/git/Sensor-Watch/movement/make
# emmake takes a normal makefile for a C project
# and compiles it to JS instead
emmake make
# Serve watch.html locally
python3 -m http.server 8000 -d build
```

The simulator itself is an adapted version of this lovely simulation of the original watch firmware for the sensorwatch project. watch.html basically contains an svg of the watchface, some glue code and the watch firmware in watch.wasm. I factored out the inline svg and glue code to end up with a snippet that I could embed in this page:

```
<figure>
  {% include watch.svg %}
  <!-- change display from none to inline to see the debug output -->
  <textarea id="output" rows="8" style="width: 100%; display: none;"></textarea>
  <figcaption>
    Click the buttons to interact with my watch firmware!
  </figcaption>
</figure>
<script async type="text/javascript" src="/assets/blog/SensorWatch/emulator.js"></script>
<script async type="text/javascript" src="/assets/blog/SensorWatch/watch.js"></script>
```

Which I can update by re-running emmake and copying over watch.js and watch.wasm:

```
emmake make && \
cp ./build/watch.wasm ./build/watch.js ~/git/tomhodson.github.com/assets/blog/SensorWatch
```

I noticed that there wasn’t support for simulating the bicolor red/green led on the sensorwatch board so I made a quick PR to fix that. Next I want to try adding my own new watch face.

I have yet to do this!

How hard could that be? It turns out harder than I expected!

**Disclaimer:** The web is an evolving thing. Depending on when you read this, some of what I say here may already be out of date, or the workarounds needed might be fixed on some browsers. This is your reminder to check the date of this post before trying to copy anything in here, something I often forget to do.

The first hurdle is how you embed your SVG files in your HTML. For the HTML version of the thesis I've been using `img` tags inside `figure` tags like this:

```
<figure>
  <img src="/path/to/image.svg"/>
  <figcaption>
    Figure 3: Caption goes here!
  </figcaption>
</figure>
```

I like this setup. It uses semantic HTML tags which give useful hints to screen readers about how to interpret this content non-visually. The problem here is the `img` tag. Embedding SVGs this way will display them nicely, but the SVG elements won't be available to any JS running on the page. This is because when the browser sees an SVG loaded through an `img` tag it renders it as a static image. This also means you can't select the text or other elements in the SVG!

So what’s the alternative? Well if you google how to embed svgs you’ll see that you have a few options: `object`

tags, `svg`

tags, `iframe`

tags etc. I had a play around with a few of these options but because I am generating my HTML from markdown via pandoc, it’s a little tricky to use entirely custom HTML. The best option for interactivity seems to be embedd the svg directly into the HTML in an `svg`

tag. I don’t like this so much because it fills my nice HTML files up with hundreds of lines of SVG and means it’s not so easy to edit them in inkscape with a tedious copy paste step.

In other pages on this blog I solved this using Jekyll. Jekyll is a static site generator and it's easy to tell it to take a file like `myimage.svg` and dump its contents into the HTML at compile time.

For the thesis however I'm using pandoc and targeting both HTML and latex. In principle I could have written a pandoc filter to replace `img` tags that link to `.svg` files with raw SVG, but I didn't want to add any more complexity to that build system just for a small easter egg. I didn't even need to do this for all the SVGs, just the ones I wanted to add interactivity to.

Instead I chose to stick with the `img` tags but use some JS to dynamically replace them with `svg` tags when I wanted to add interactivity. I query for the image I want, use `d3.xml` to download the content of the `src` attribute, and then replace the `img` with the constructed `svg` tag.

```
//grab the img tag containing our target svg
const img = document.querySelector("img#id-of-image");
if(img !== null) {
    d3.xml(img.getAttribute('src')) //download the svg
      .then(data => {
          const svg = data.documentElement;
          svg.setAttribute('width', '100%');
          svg.setAttribute('height', 'auto');
          d3.select(img).node().replaceWith(svg);
      });
}
```

My target image looks like this

This diagram represents a model of a quantum system of *spins* and *fermions*. The spins are the little arrows which can either be up or down and the fermions are the little circles which can either be filled or unfilled. I want to make both of them switch states when you click them.

First we need a way to select the fermions with d3; this is where the xml editor in inkscape comes in. With `Edit > XML Editor` you can add attributes to any SVG element using the little "+" icon. I used this to add `class: fermion` to each of the fermion circles.

Now to animate them with d3. After a false start involving trying to figure out how to reliably compare colours in d3, I switched to using opacity and ended up with this code:

```
const fermions = d3.select(svg).selectAll(".fermion");
fermions.on("click", function() {
    d3.select(this)
      .transition()
      .duration(100)
      .style("fill-opacity", d => {return d3.select(this).style("fill-opacity") === '1' ? 0 : 1});
}, true)
```

The trick here is that in d3 you can set attributes with a function and use `d3.select(this)` to get a handle on the current element. You can then do a query on the current value of the style and change it accordingly. Originally I had wanted to switch the fill colour between black and white, but try as I might I could not find a way to reliably compare two colours.

Same deal for the spins: add a `class: spin` to them all using the XML editor.

I had originally wanted them to animate a nice rotation, but I couldn't find an easy way to compute the geometric centre of each spin to rotate about. I had a go with `transform-origin: centre` but couldn't get it to work.

So I used a different hack: I switched which end of the line the arrow head is on:

```
const spins = d3.select(svg).selectAll(".spin");
spins.attr("pointer-events", "all"); //this is the subject of the next paragraph!
spins.on("click", function() {
    const start = d3.select(this).select("path").style("marker-start");
    const end = d3.select(this).select("path").style("marker-end");
    const direction = (start === "none");
    const url = direction ? end : start;
    d3.select(this).select("path")
      .transition()
      .duration(100)
      .style("marker-start", () => {return direction ? url : "none"})
      .style("marker-end", () => {return direction ? "none" : url})
}, true)
```

After this, I could make the spins flip, but only if I clicked in a very tiny area near each spin. It turns out that this is because the default way for SVG elements to determine if you've clicked on them is a bit conservative. Adding `spins.attr("pointer-events", "all");` fixes this.

Finally we end up with this:

You can also see it in context in the introduction to my thesis.

For a while I've wanted to be able to build my Overleaf projects locally so that I can work on them when the internet is poor. Well, I finally figured out how to do it!

Instructions here; it's worth getting the version with all the packages because you'll likely need a bunch and they're a pain to install one by one.

Make sure you have the TeX Live package manager `tlmgr`, which I'm pretty sure is installed along with latex.

Update tlmgr; depending on how it's installed, `tlmgr` may or may not need root permissions (mine does).

```
sudo tlmgr update --self #update tlmgr because it always complains
```

Overleaf uses latexmk to manage compilation so you need that. And if you're like me and you only installed the light version of texlive above, then you'll likely need a bunch of extra packages for your target overleaf project, so install `texliveonfly` which we'll use later to autoinstall the packages.

```
sudo tlmgr install latexmk texliveonfly
```

You can either clone your overleaf project directly with

```
git clone $overleaf_project_link
```

or create a linked github repo from the settings tab of Overleaf and clone that.

Now cd into your newly cloned repo and use `texliveonfly` to install the packages that your project depends on by running `sudo texliveonfly` on your main tex file.

```
sudo texliveonfly main.tex
```

The actual compilation is done with `latexmk`:

```
latexmk -pdf -shell-escape main.tex
```

I had to add the `-shell-escape` option because I was using a package (latexmarkdown) that requires running external commands.

I use miniconda3 with the package directory set up to point to somewhere other than the home directory because the university system I’m working on only gives a small amount of space in your home directory.

My ~/.condarc contains:

```
channel_priority: strict
channels:
- conda-forge
- defaults
```

channel_priority: strict is supposed to speed up conda's complicated dependency resolution system, and I heard on the grapevine that conda-forge packages tend to be more up to date than the ones in defaults, so I've switched the priority.

I then have a "base" environment where I've installed jupyterlab.

Hint: you can use `conda env export --from-history` to see the packages you actually installed into a conda environment, rather than the ones that were installed as dependencies. I wish there were a shorter form for this command because I think it's really useful.

```
(base) $ conda env export --from-history
name: base
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - jupyterlab
  - jupyterlab-git
  - nb_conda_kernels
```

jupyterlab is the package that runs the show, we also have jupyterlab-git which is a git plugin for it and nb_conda_kernels which allows jupyter in the base env to see the other environments on the system.

Now I have project specific envs where I actually run python code. Here’s an example one:

```
name: fk
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - scipy
  - numpy
  - matplotlib
  - munch
prefix: /home/tch14/miniconda3/envs/fk
```

I often use my laptop to access a Jupyterlab server running on a beefier machine under my desk at work. The ~/.ssh/config on my laptop contains:

```
host beefyremote
    user [username]
    hostname [hostname]
    proxyJump [a machine that you need to jump through if necessary]
    IdentityFile ~/.ssh/id_rsa_password_protected
    # give access to the jupyter server on [beefyremote]:8888 to localhost:8889
    LocalForward 8889 localhost:8888
    #LocalCommand
```

I open this connection and then run `jupyter lab --no-browser --port=8888` within a tmux session on the remote machine so that the server doesn't die with the connection.

I did have some trouble getting the image into the right format though. After messing around a little I settled on this command using ImageMagick:

```
convert image.png -depth 1 -monochrome BMP3:LOGOIN.BMP
```

Then you can use `identify` to check that the metadata of the output is indeed 1-bit in depth.

```
identify LOGOIN.BMP
```