
WireGuard VPN Walkthrough


With the rise of privacy consciousness, people are looking to solutions like a hosted VPN (I hear one should never use a free service) or self-hosted options like streisand and algo. How does a VPN (a remote access VPN – not a site-to-site VPN, for the pedantic) help maintain privacy?

In the scenario of maintaining privacy or getting around geographic content blocking, the VPN connects you to a server, oftentimes in a different country, where it forwards all your traffic to the intended recipient. The recipient responds to the server, which dutifully forwards back to you. So, if you live in the US, but are VPNed into a German server and request content from India, India will think you’re in Germany (this assuming countries have thoughts </joke>).

I’m going to show how to self host WireGuard, which bills itself as easier to configure than IPSec and OpenVPN, while being faster and more powerful. WireGuard is a component feature of streisand, but since we’re dealing with only a linux client and server setup, we can cut out the streisand middleman and use WireGuard directly. Theoretically, this cuts down on the bloat and attack surfaces inherent in the wide array of software that streisand installs (streisand is planning on supporting modular installs in the future).

It should be noted:

WireGuard is not yet complete. You should not rely on this code. It has not undergone proper degrees of security auditing and the protocol is still subject to change.

This demonstration will be on a DigitalOcean Ubuntu 16.04 box, but it should be easily adaptable for other platforms (as long as they are linux based).

Script

The following script is to be executed on one’s server. This script will be subsequently broken down.

#!/bin/bash

# The client's public key (generated in subsequent section client side)
CLIENT_PUBLIC="<ENTER>"

sysctl -w net.ipv4.ip_forward=1
add-apt-repository --yes ppa:wireguard/wireguard
apt-get update
apt-get install --yes wireguard-dkms wireguard-tools

wg genkey | tee privatekey | wg pubkey > publickey

PRIVATE=$(cat privatekey)
echo "public: $(cat publickey)"

cat > /etc/wireguard/wgnet0.conf <<EOF
[Interface]
Address = 10.192.122.1/24
SaveConfig = true
ListenPort = 51820
PostUp = iptables -A FORWARD -i wgnet0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wgnet0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PrivateKey = $PRIVATE

[Peer]
PublicKey = $CLIENT_PUBLIC
AllowedIPs = 10.192.122.2/32
EOF

Simple enough, but how does it work?

sysctl -w net.ipv4.ip_forward=1

Sysctl allows modifying kernel parameters at runtime, so here we’re allowing the kernel to forward packets from one network interface to another. We need this because wireguard works by creating the VPN on another network interface (commonly called wg0 or wgnet0). This interface, by itself, does not have internet access, but with IP forwarding we can forward traffic from the VPN to the interface that can communicate with the internet.

Forwarding is only important for the server because once connected to the VPN the default client interface won’t be used anymore.
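
Note that sysctl -w only applies until the next reboot. To persist the setting on Ubuntu, append it to /etc/sysctl.conf (or a file under /etc/sysctl.d) and reload:

# Persist IP forwarding across reboots, then reload kernel parameters
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p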

add-apt-repository --yes ppa:wireguard/wireguard
apt-get update
apt-get install --yes wireguard-dkms wireguard-tools

These commands fetch the latest wireguard version and install it. Since WireGuard hooks into the kernel, it attempts to automatically detect the correct kernel to hook into. This should work flawlessly.

The one problem I’ve had is that DigitalOcean controls the kernel through its web interface (one can use a custom kernel, but that is outside the scope of this post). Anyways, if you had tried to install a custom kernel on top of the one DigitalOcean provides, wireguard will skip the correct kernel because it believes it’s chrooted. Sorry for the tangent, but since I experienced this problem, I figured I should document it for others.

wg genkey | tee privatekey | wg pubkey > publickey

Both the client and server need to generate a pair of keys. The server does not need to know the client’s private key and vice versa; however they do need to know each other’s public key to permit only authorized use of the VPN (else anyone who knew your VPN server’s address could use your VPN).
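
Key generation on the client is the same command; setting a restrictive umask first keeps the private key out of other users’ reach (a minimal sketch):

# Run on the client: restrict file permissions, then generate the keypair
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
echo "client public: $(cat publickey)"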

[Interface]
Address = 10.192.122.1/24
ListenPort = 51820

When clients connect to the server, they can communicate directly with it using 10.192.122.1. We know that 10.192.122.1 can’t possibly be an internet facing box because it falls within a private network range. The /24 is a CIDR subnet mask stating that this VPN is capable of housing 254 clients. WireGuard then listens on port 51820 for interested clients.

PostUp = iptables -A FORWARD -i wgnet0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wgnet0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

Probably the most convoluted section of the config, yet this step must not be skipped. Here is how those lines configure the firewall.

When the VPN is created:

  • We accept packets arriving on our VPN interface that are being routed through the box
  • Then whenever a new connection is created (eg. our client wants to access google.com, so the server now needs to connect to google.com), the outgoing packets are altered to have the server’s IP address, so google.com responds to the server, which relays the response back to the client.

Then when the VPN is destroyed, those rules are removed from the firewall.

If you forget those lines, when you go to connect as a client your requests will blackhole and it may appear as if you lost internet connection.

[Peer]
PublicKey = $CLIENT_PUBLIC
AllowedIPs = 10.192.122.2/32

The peer section is for client information. The client that connects with the given client public key is assigned 10.192.122.2 as its IP address.
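
Additional clients are added the same way: one [Peer] block per client, each with its own public key and a unique /32 address. A hypothetical second client would look like:

[Peer]
PublicKey = <SECOND_CLIENT_PUBLIC>
AllowedIPs = 10.192.122.3/32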

Client Config

The server is all set up, so what does the client configuration look like?

[Interface]
Address = 10.192.122.2/32
PrivateKey = CLIENT_PRIVATE_KEY
DNS = 10.192.122.1

[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0
Endpoint = SERVER_IP:51820

Breaking down the config:

[Interface]
Address = 10.192.122.2/32
DNS = 10.192.122.1

The 10.192.122.2/32 matches the address assigned to this client in the server’s [Peer] section.

We set the client’s DNS server to that of the VPN server (this assumes the server runs a DNS resolver, eg. dnsmasq or unbound, listening on its VPN address). This is not needed, but I recommend it, as you want to communicate with servers (eg. google.com) that are closest to your VPN server to minimize latency. For instance, if you lived in the US, VPNed into Singapore, and wanted google.com, you’d want to talk to a Singapore Google server (and not the ones in the US) so that packets travel the least distance.
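
With the config saved as /etc/wireguard/wgnet0.conf on the client (matching the server’s naming), wg-quick manages the tunnel:

# Bring the tunnel up
wg-quick up wgnet0

# Check that a handshake occurred and traffic is flowing
wg

# Tear the tunnel down
wg-quick down wgnet0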

Automation

It is actually possible to create and destroy VPN boxes on demand for next level privacy. Here is how one would do this:

  • Create DigitalOcean box by hand using the previous instructions.
  • Verify that the VPN works (wg-quick up wgnet0 on both client and server).
  • Shutdown the server and take snapshot.
  • Using python-digitalocean, one can create a new server from our snapshot (the server will have the same public and private key; it’ll just have a new IP address), as sketched below
  • Update your client config to reference the server’s new IP address

And just like that you can create and destroy VPNs all around the world in under a minute.
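
To make the python-digitalocean step concrete, here’s a rough sketch driving DigitalOcean’s v2 API directly with curl (the same API the library wraps); the token, snapshot ID, region, and size below are placeholders:

# Create a droplet from the VPN snapshot ($DO_TOKEN and $SNAPSHOT_ID are
# placeholders for your API token and snapshot ID)
curl -X POST "https://api.digitalocean.com/v2/droplets" \
    -H "Authorization: Bearer $DO_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"name\":\"vpn\",\"region\":\"fra1\",\"size\":\"512mb\",\"image\":$SNAPSHOT_ID}"

# Then point the client config at the droplet's new IP and reconnect
sed -i "s/^Endpoint = .*/Endpoint = $NEW_IP:51820/" /etc/wireguard/wgnet0.conf
wg-quick down wgnet0 && wg-quick up wgnet0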


Robust systemd Script to Store Pi-hole Metrics in Graphite

Ah, a serene lake of … metrics

I’m building a home server and I’m using Pi-hole to blackhole ad server domains in a test environment. It’s not perfect, but it works well enough. The Pi-hole admin interface shows a dashboard like the following to everyone (admin users get an even greater breakdown, but we’ll focus on just the depicted dashboard).

pi-hole admin

It’s a nice looking dashboard; however, it’s a standalone dashboard that doesn’t fit in with my grafana dashboards. I don’t want to go to multiple webpages to know the health of the system especially since I want to know two things: is pi-hole servicing DNS requests and are ads being blocked. This post will walk through exporting this data from pi-hole into Graphite, which is my time series database of choice.

There is a blog post that contains a python script that will do this process. Not to pick on the author (they inspired me), but there are a few things that can be improved:

  • The python script is not standalone. It requires the requests library. While ubiquitous, the library is not in the standard library and will require either a virtual environment or installation into the default python
  • Any error will crash the program (eg connection refused, different json response, etc) and it’ll have to be manually restarted
  • No logs to know if the program is running or why it would have crashed
  • Does not start on computer reboot

An Alternative

Let’s start with making a network request:

curl http://$HOST:$PORT/admin/api.php?summaryRaw

returns

{"domains_being_blocked":116855,"dns_queries_today":8882 ... }

Ah, let’s turn to jq to massage this data, which will, by default, prettify the output:

{
  "domains_being_blocked": 116855,
  "dns_queries_today": 8882,
  "ads_blocked_today": 380,
  "ads_percentage_today": 4.278316,
  "unique_domains": 957,
  "queries_forwarded": 5969,
  "queries_cached": 2533,
  "unique_clients": 2
}

We somehow need to get the previous data into the <path> <value> <timestamp> format for carbon, with entries separated by newlines.

Since jq prefers working with arrays, we’ll transform the object into an array: jq 'to_entries'

[
  {
    "key": "domains_being_blocked",
    "value": 116855
  },
  {
    "key": "dns_queries_today",
    "value": 8954
  },
  // ...
]

Now we’re going to transform each element of the array into a string of $key $value with jq 'to_entries | map(.key + " " + (.value | tostring))'. Value is numeric and had to be converted into a string.

[
  "domains_being_blocked 116855",
  "dns_queries_today 8966",
  "ads_blocked_today 385",
  "ads_percentage_today 4.294",
  "unique_domains 961",
  "queries_forwarded 6021",
  "queries_cached 2560",
  "unique_clients 2"
]

Finally, unwrap the array and strings with jq -r '... | .[]' (-r emits raw, unquoted strings) to get:

domains_being_blocked 116855
dns_queries_today 9005
ads_blocked_today 386
ads_percentage_today 4.286508
unique_domains 962
queries_forwarded 6046
queries_cached 2573
unique_clients 2

We’re close to our desired format. All that is left is an awk one-liner (note: >>/dev/tcp/host/port is a bash feature, so this needs to run under bash, not plain sh):

awk -v date=$(date +%s) '{print "pihole." $1 " " $2 " " date}' >>/dev/tcp/localhost/2003

So what does our command look like?

curl --silent --show-error --retry 5 --fail \
       http://$HOST:$PORT/admin/api.php?summaryRaw | \
    jq -r 'to_entries |
           map(.key + " " + (.value | tostring)) |
           .[]' | \
    awk -v date=$(date +%s) '{print "pihole." $1 " " $2 " " date}' \
    >>/dev/tcp/localhost/2003

Is this still considered a one-liner at this point?

I’ve added some command line options to curl so that all non-200 status codes are treated as errors and so that curl will retry up to 5 times (spanning roughly half a minute) to let the applications finish booting.
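
For reference, the final payload written to carbon’s plaintext port looks something like this (values taken from the earlier output; the timestamp is illustrative):

pihole.domains_being_blocked 116855 1502238840
pihole.dns_queries_today 9005 1502238840
pihole.ads_blocked_today 386 1502238840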

We could just stick this in cron and call it a day, but we can do better.

systemd

systemd allows for some nice controls over our script that will solve the rest of the pain points with the python script. One of those pain points is logging. It would be nice to log the response sent back from the API so we’ll know what fields were added or modified. Since our script doesn’t output anything, we’ll capture the curl output and log that (see final script to see modification, but it’s minimal).

With that prepped, let’s create /etc/systemd/system/pihole-export.service:

[Unit]
Description=Exports data from pihole to graphite

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pi-hole-export
StandardOutput=journal+console
Environment=PORT=32768

  • Type=oneshot: great for scripts that exit after finishing their job
  • StandardOutput=: Has the stdout go to journald (which is indexed, log-rotated, the works). Standard error inherits from standard out. Read a previous article that I’ve written about journald
  • Environment=PORT=32768: sets the environment for the script (allows a bit of configuration)

After reloading the daemon to find our new service, we can run it with the following:

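# Pick up the new unit file
systemctl daemon-reload
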
systemctl start pihole-export

# And look at the output
journalctl  -u pihole-export

If we included an exit 1 in the script, the status of the service would be failed even though it is oneshot, and the log will show what data caused the failure or whether a connection was refused (printed to standard error). This allows systemd to answer the question “what services are currently in the failed state”, and I’d imagine that one could create generic alerts off that data.

One of the last things we need to do is create a timer, /etc/systemd/system/pihole-export.timer, to trigger the service every minute.

[Unit]
Description=Run pihole-export every minute

[Timer]
OnCalendar=*-*-* *:*:00
AccuracySec=1min
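
As written, the timer lacks an [Install] section, so there is nothing for systemctl enable to hook onto. Assuming the units are named pihole-export.timer and pihole-export.service (systemd pairs a timer with the service of the same name by default), appending the following covers the start-on-boot requirement:

[Install]
WantedBy=timers.target

Then enable it:

systemctl daemon-reload
systemctl enable --now pihole-export.timer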

It might annoy some people that the amount of configuration is about the same number of lines as our script, but we gained a lot. In a previous version of the script, I preserved standard out by using tee with process substitution to keep the script concise. This resulted in logs showing the script running every minute, but graphite capturing only approximately every other data point. Since I knew from the logs that the command exited successfully, I realized process substitution happens asynchronously, so there was a race condition between tee finishing and sending the request. Simply replacing tee with a temporary buffer proved effective enough for me, though there reportedly are ways of working around the race condition.

Final script:

#!/bin/bash
# Script to stop immediately on non-zero exit code, carry exit code through
# pipe, and disallow unset variables
set -euo pipefail

OUTPUT=$(curl --silent --show-error --retry 10 --fail \
       "http://localhost:$PORT/admin/api.php?summaryRaw")
# Log the raw response so journald captures what the API returned
echo "$OUTPUT"
echo "$OUTPUT" | \
    jq -r 'to_entries |
           map(.key + " " + (.value | tostring)) |
           .[]' | \
    awk -v date="$(date +%s)" '{print "pihole." $1 " " $2 " " date}' \
    >>/dev/tcp/localhost/2003

Let’s review:

  • The new script relies on system utilities and jq, which is found in the default Ubuntu repo.
  • Logging output into journald provides a cost free debugging tool if things go astray
  • Anything other than what’s expected will cause the service to fail and notify systemd, which will try it again in a minute
  • Starts on computer boot

Sounds like an improvement!

Grafana

Now that we have our data robustly inserted into graphite, it’s time to graph it! The two data points we’re interested in are dns_queries_today and ads_blocked_today. Since they are counts that reset after 24 hours, we’ll calculate the derivative so we can get a hit count.

grafana dashboard

alias(summarize(nonNegativeDerivative(keepLastValue(pihole.dns_queries_today)),
	'$interval', 'sum'), 'queries')

alias(summarize(nonNegativeDerivative(keepLastValue(pihole.ads_blocked_today)),
	'$interval', 'sum'), 'blocked')

The best part might just be that I can add in a link in the graph that will direct me to the pi-hole admin in the situations when I need to see the full dashboard.


Building a Home Server

RAID-0 of oysters

I wrote this article not because I’ve built a home server but because I’m on the verge of doing it and I’d like to justify (to myself) why building one is reasonable and why I chose the parts I did.

I want to build a home server because I realize that I have content (pictures, home movies, etc). This content used to be stored on an external hard drive, but when that hard drive died, I lost a good chunk of it. Since then, I’ve moved the rest to a Windows Storage Pool. But then I thought about accessing the content remotely, and I didn’t want my work / gaming PC to be on 24/7 (power efficiency) and exposed to the internet (security). Having an overclocked CPU and GPU on 24/7 (even if at idle) isn’t ideal.

Using Onedrive has been fine – great for editing online documents, but space is limited and I want a better sharing story (eg. loved ones back up their pictures here too). Though, probably the most important reason is that setting up a home system sounds fun to me and it’s a good learning opportunity.

First up the part list:

Type          Item                                                        Price
CPU           Intel - Pentium G4600 3.6GHz Dual-Core Processor            $86.99 @ Amazon
CPU Cooler    Noctua - NH-L9i 33.8 CFM CPU Cooler                         $39.15 @ Newegg
Motherboard   ASRock - E3C236D2I Mini ITX LGA1151 Motherboard             $239.99 @ Newegg
Memory        Kingston - ValueRAM 16GB (1 x 16GB) DDR4-2133 Memory        $159.79 @ Amazon
Storage       6x Seagate - Desktop HDD 4TB 3.5" 5900RPM
Case          Fractal Design - Node 304 Mini ITX Tower Case               $74.99 @ Newegg
Power Supply  Silverstone - 300W 80+ Bronze Certified SFX Power Supply    $49.99 @ Amazon
Total         (prices include shipping, taxes, rebates, and discounts)    $650.90
Generated by PCPartPicker 2017-08-08 19:54 EDT-0400

The Case

The case really defines the rest of the build, so I’m starting here. A small form factor (SFF) case limits one to more expensive components, while a larger case takes up more room. I waffled between many cases – I was trying to get a small case that would fit on a shelf in the utility closet but wouldn’t compromise on the number of 3.5” drives. The height restriction excluded a lot of decent mini tower cases because even they were too tall. Here were the contenders:

SilverStone DS380B


A SFF case with 8 hot-swappable 3.5” drive bays inside + more is quite an achievement, and it is the only option when SFF is needed with an absolute maximum number of drives. The downsides are that at $150 it was on the pricey side, and many reviews stated that thermal management was a challenge, so aftermarket fans and case modding are a necessity. This guy wrote an article solely to convince people not to buy the DS380B. Anyways, one goal of this build is to keep cost and effort to a minimum, so this case was eliminated.

SilverStone GD06


An HTPC case that has a lot going for it. The horizontal design makes it alluring, as it can be placed on one of my cabinets. But with only four slots for 3.5” drives, it would be limited as far as a storage server is concerned. Double parity RAID would mean that half of the drives are redundant. The worst thing that could happen would be running out of room and being forced to decide whether to buy bigger drives or get a dedicated NAS case.

Fractal Design Node 304


A SFF cube case that has six 3.5” bays, goes on sale for $60, has great reviews, and is touted for its silent fans!? Sold.

Lian Li PC-Q25


Special mention must be made of Lian Li’s case, which houses 7 drive bays but costs more, and some (not many) have reported thermal issues.

The CPU

I’ve decided on the Pentium G4600.

  • With Kaby Lake, Pentium processors are blessed with hyper-threading, so their 2 physical cores become 4 logical cores.
  • Kaby Lake also improved power efficiency, with a stress-tested G4560 using only 24W.
  • None of the “Core” chips support ECC memory (thus they were excluded)
  • Paying 15% more for a 100 MHz boost made me exclude the G4620
  • I actually wanted the top of the line integrated graphics (Intel HD Graphics 630) because there won’t be a dedicated GPU in this box, and I will cry if I’m GPU limited anywhere.
  • Cheap! I’m going to grab it when the price hits $80
  • The server will sit idle most of its life, so there’s no need for a powerful CPU. In the future, if it turns out I need more horsepower, there should be a nice array of secondhand Kaby Lake Xeons by then.

The Motherboard

A Mini-ITX motherboard that supports ECC memory, socket 1151, and Kaby Lake basically makes the decision for us!

There were a couple of ASRock boards, and I went with the E3C236D2I, the one with six SATA ports (same as the case) and an added bonus of IPMI.

Unfortunately, a $240 price tag is a bit hard to swallow. There is definitely a price to pay for keeping the size down while using enterprise RAM!

The RAM

Speaking of the RAM, I went with a single stick of 16GB ECC RAM. This may seem odd, but I’ll try to explain. I’m using ECC memory because I want to be safe rather than sorry, and I’m not scrounging around looking for pennies, so I can afford it. I’m only interested in a single stick because buying 32GB upfront seems overkill (I’m not made of money), and since the motherboard only has two DIMM slots, I wanted something significant that should last in the meantime.

On a side note, RAM is expensive right now: this 16GB stick is retailing for $150, whereas it debuted at $75. Don’t worry, I have price triggers.

CPU Cooler

Even though the case supports tower CPU coolers and the Pentium G4600 comes with a stock cooler, I’ve opted for the slim aftermarket Noctua NH-L9i. The Noctua promises to be much quieter than the stock cooler, and since the fan is so slim, if I decide to get an even tinier case in the future, the fan will fit!

Since I won’t be overclocking the CPU, I’ll be able to use the low noise adaptor to make the cooler even quieter.

Power Supply

I went with the SilverStone 300W power supply.

  • I couldn’t buy anything below 300W (I was shooting for something around 200W). The reason is that power supplies are made to operate between 20% and 100% of their rated wattage. If I had gone with a 450W power supply (the next one in SilverStone’s lineup), I’d need an idle usage of at least 90W instead of 60W to get that guaranteed efficiency. Basically, this is me being environmentally conscious.
  • An 80+ bronze rating was a distinguishing feature in this low of a power range
  • SFX form will allow me to get an even smaller case in the future if needed
  • Is semi-fanless (quiet). People report that only under extreme duress does the fan turn on

Storage

I already have a couple of 4TB Seagate 3.5” drives, so getting more of them is a logical choice. Ideally, I wouldn’t have to buy all of them up front, but that is the cost of ZFS. Here’s to hoping I get a good deal on them!

One of the things I’m still pondering is what I should do about a bootable drive. I could drop down to a RAID of 5 drives and get a different drive for the OS. Brian Moses uses a flash drive. I’m actually thinking of using my one PCIe slot to host an M.2 PCIe adapter and grabbing a Samsung 960 EVO or something similar. PCPartPicker doesn’t list the motherboard as capable of using M.2, but we’ll see about that, as the motherboard manual specifically calls out instructions for M.2 NVMe drives.

Update: The motherboard does support M.2 drives, but only of the smallest kind (form factor 2242), which really limits the potential drives. I’ll have to look towards 2.5” SATA SSDs.

Software

After trying for a week to get FreeBSD and Plex working together, I gave up and decided that Ubuntu 16.04 with docker is the way forward. Let me explain:

The first task was to determine whether to use a hardware RAID controller or a software based one. Searching around, it became clear that software RAID was better due to cost and the features of file systems like ZFS. Speaking of ZFS, it’s the best file system for a home server, as it is built for turning several disks into one and features compression, encryption, etc.
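
Since the build calls for six 4TB disks and double parity came up when sizing the case, pool creation would look roughly like this (a sketch; the device names are hypothetical, and /dev/disk/by-id paths are preferable in practice):

# Create a double-parity (RAIDZ2) pool named "pool" from the six data disks
zpool create pool raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Turn on inexpensive transparent compression
zfs set compression=lz4 pool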

Having chosen ZFS, it would make sense for the OS to be FreeBSD. ZFS and FreeBSD go together like bread and butter. They are tried and tested together. Since I was (and still am) unfamiliar with FreeBSD, I spent a week learning about jails and other administrative functions. The concept of jails (application isolation without performance cost) sounded amazing. Not to mention FreeBSD seemed like a lightweight OS; running top would only show a dozen or so processes. I quickly got to work setting up a FreeBSD playground inside a virtual machine.

First I tried setting up an NFS server but ran into problems: I needed NFS v4 to run an NFS server on nested ZFS filesystems, but NFS v4 support isn’t baked into Windows, so it was a no go. Then, after only a couple hours of fighting with SMB, I finally got it working. I’m just going to squirrel away the config here for a rainy day:

[global]
workgroup = WORKGROUP
server string = Samba Server Version %v
netbios name = vm-freebsd
wins support = No
security = user
passdb backend = tdbsam
domain master = yes
local master = yes
preferred master = yes
os level = 65

# Share /pool/data with the users listed below
[pool]
path = /pool/data
valid users = nick, guest
writable  = yes
browsable = yes
read only = no
guest ok = yes
public = no
create mask = 0666
directory mask = 0755

I think the trick was that I wanted SMB users to be users on the VM, so the Samba server should act as the master.

So as you can see, everything was going smoothly – that is, until I tried setting up Plex. I thought that since plexmediaserver was on FreshPorts, everything should work. It didn’t, and since I didn’t know FreeBSD, ZFS, or plex, I went on a wild goose chase of frustration. The internet even failed me, as the errors I searched for came back with zero results.

In a fit, I created an Ubuntu VM and ran the plex docker container, and everything just worked. I gave up FreeBSD right then and there; I wasn’t going to force something. I later found out that since FreeBSD represented less than 1% of plex’s user base, the team didn’t want to spend the resources on updates. Oh well. Ideally I wouldn’t have to use docker (downloading all those images seems … bloated), but given its rise to ubiquity and promise of compatibility, I’ll hop on the bandwagon.

With that, let’s take a look at some of the applications I’m looking to run:

  • ddclient: A dynamic dns client. It keeps my dns records updated whenever my ISP decides to give me a new IP.
  • nginx: A webserver that will serve as a reverse proxy for all downstream applications. Will be able to use certificates from Let’s Encrypt without configuring each application.
  • collectd: A system metric gatherer (CPU, Memory, Disk, Networking, Thermals, IPMI, etc). This will send the data to:
  • graphite: Using the official graphite docker image to store various metrics about the system and other applications. These metrics will be visualized using:
  • grafana: Using the official grafana docker image creates graphs and dashboards that are second to none. Just look what I did for my home PC
  • plex: Using the official plex docker image will be used to host the few movies and shows that I have lying around.
  • nextcloud: Using the official nextcloud docker image will be essential for creating my own “cloud”. I can even use extensions to access my keepass or enable two factor authentication
  • gitea: Using the official gitea docker image, I’ll be hosting my private code here.
  • jenkins: The official jenkins docker will build all the private code
  • rstudio: The rocker docker image will let me access my rstudio sessions when I’m away. Currently, I have a digitalocean machine with rstudio, but it’s been a pain for me to create and destroy the machine every time I need it.
  • pi-hole: blocks ads on the DNS level so one can block ads on all devices on the network. And, of course, there is a docker image, which has been working wonderfully in my test playground.
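
Most of these boil down to a single docker run invocation. A sketch using the official grafana image (the volume path here is illustrative; port 3000 is grafana’s default):

# Run grafana detached, exposing its web UI and persisting its data
docker run -d --name grafana \
    -p 3000:3000 \
    -v /pool/docker/grafana:/var/lib/grafana \
    grafana/grafana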

You’d be wrong if you thought I’d abandon my current cloud storage providers (Onedrive, Google Drive, etc). In fact, I pay them for increased storage because stuff happens and I need to have backups of pictures, home videos, code, and important documents. I’m planning on keeping all the clouds in sync with everything encrypted using rclone. That way if a backup is compromised, it is no big deal.
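
The rclone flow might look like the following (a sketch: "onedrive-crypt" is a hypothetical crypt remote, configured beforehand with rclone config, that encrypts data before it lands in OneDrive):

# Sync local content to the encrypted remote
rclone sync /pool/data onedrive-crypt:backup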

I’m also not going to abandon DigitalOcean, as those machines easily have more uptime and uplink than Comcast here. My philosophy is that if I want to show people my creation, I’ll host it externally, else I’ll self-host it. Plus it is a lot easier to tear down and recreate machines with an IAAS rather than bare metal.

The only question now is … when will I jump head first?