Initial Impressions of a Website with Modern Build Tools
The tutorial mentioned was a Pluralsight course I had signed up for with a temporary free account due to my
It’s absolutely critical that one puts foo.js before bar.js
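The classic reason is an implicit global dependency; a minimal sketch (the file names match the above, but the contents are hypothetical):

```javascript
// foo.js (hypothetical contents): attaches a small library to the
// global object as a side effect of loading
globalThis.foo = { greet: name => `hello ${name}` };

// bar.js (hypothetical contents): assumes foo.js has already run;
// swap the script tags and this line throws because globalThis.foo
// is undefined
const greeting = globalThis.foo.greet('world');
```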
So in mid-2013, the code below was state of the art.
Forget the bragging: other frameworks blow the ad-hoc one I created out of the water, and I've always been on the lookout for a replacement.
KnockoutJS and D3
The project was to improve the University of Michigan's Incident Log, which did not even look as good then as it does now, so that users could see incidents on a map along with cool, if purposeless, statistics. The result was umich.nbsoftsolutions.com. You'll notice it is no longer working: the University upgraded their website, so my custom scraper, built with BeautifulSoup, broke.
For the frontend, there were about ten different script tags pointing at CDNs like cdnjs. The promise was that if a visitor had already downloaded an asset for another site, their browser could reuse the cached version. A couple of restrictions limited the usefulness: the URL had to be identical, so the asset had to come from cdnjs and the library version had to match. On the positive side, it could lighten server resources and load the site faster if the CDN harnessed geolocation to serve assets from the server closest to the visitor.
I wrote a JSON API in Flask as opposed to what I had known at the time, Bottle. For me, Flask > Bottle, but Bottle still has its uses. I don’t understand why Bottle is contained in a single large file. Seems brittle.
What cracks me up is a comment I have in my Flask API about sending a certain JSON payload:
So worrying about the 16KB of data is being paranoid.
D3, the data visualization library, was the furthest outside my comfort zone. If it weren't for Mike Bostock and his countless examples (like this one, which I repurposed), I probably would not have succeeded. I had the hardest time wrapping my head around the enter function. Despite my unexpected learning hump, D3 is one of the few libraries (some may even call it a framework) that has stood the test of time: I have a project that uses D3 for its math and path calculations, but less for direct DOM manipulation. I'm excited that D3 is planned to be broken up into modules!
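The idea behind enter can actually be sketched without D3 at all: a data join splits your data into items that already have a matching element (update) and items that do not yet (enter). A toy version in plain JavaScript (not D3's API):

```javascript
// Toy data join: existingIds are the keys already bound to DOM nodes;
// data is the new dataset; key extracts the join key from a datum.
function dataJoin(existingIds, data, key) {
  const bound = new Set(existingIds);
  return {
    // enter: data with no element yet; D3 would create nodes for these
    enter: data.filter(d => !bound.has(key(d))),
    // update: data that already has an element to re-render
    update: data.filter(d => bound.has(key(d))),
  };
}

const { enter, update } = dataJoin([1, 2], [{ id: 2 }, { id: 3 }], d => d.id);
// enter is [{ id: 3 }], update is [{ id: 2 }]
```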
I've been a closet fan of Ember since at least v1.6.0 (summer of 2014). I remember v1.6.0 so distinctly because I had just started an internship at a hedge fund, and my internship project begged for something like Ember. I pleaded my case, if only to satiate my drive to learn and try my hand at web development (who cares if an intern fails!?). My wish was not granted (more like crushed!), so I fell back to reading blog posts and following releases while I waited for a side project where I could use Ember.
For someone on the outside, the appeal of Ember is its community. It seemed so well put together: not driven by some mega-corporation with dubious intentions. The people behind it are brilliant, Tom Dale and Yehuda Katz to name a few. It was a community I wanted to be a part of.
Finally, in March of 2015, I got an excuse to start using Ember. A few months earlier I had sat next to a fascinating statistician on a train from Chicago to Ann Arbor. Although he had little to no programming experience, I was able to explain multithreading and synchronization issues to him with an analogy of airplanes sharing a runway, and he grokked it far better than I expected. (Shortly after, I wrote Await Async in C# and Why Synchrony is Here to Stay and C# and Threading: Explicit Synchronization isn't Always Needed.) My new friend, in March 2015, had just arrived in Africa and needed a website where African startups could aggregate their data for potential investors.
With few requirements and only my imagination to go on, I started hacking away. I volunteered to give weekly updates, both to hold myself accountable and to have a steady form of communication. Initially this worked great and progress was made; however, the trend did not last. I started stalling on Ember itself. The community was at an awkward point between Ember 1.x and 2.x: there was a movement to migrate to a "pod" directory structure, Ember CLI was in constant churn and had notoriously bad performance on Windows, and the Ember libraries I wanted to incorporate always seemed to be waiting for the next release to change their API. Perhaps not surprisingly, after about a month of development my enthusiasm ran out of steam. My updates became progressively shorter, and my friend soon went silent.
It’s for the best that I stopped working on the project because I was going to be reinventing a CMS, and nobody wants to be stuck with that task.
A Node Diversion
These tests would then be executed in a Node environment. I briefly tried to see if the tests would execute in a browser environment but was unable to (this is solved with a compilation step discussed later).
Below are my impressions on using some of the technologies listed in the post.
Playing with CSS in this project blew my mind. I used to put all my LESS or CSS in a single file referenced from the HTML, but it is now possible, and in this componentized world some may even recommend it, to create a CSS file for each component. Traditionally, this would have meant many files and the possibility of class names conflicting across components.
Enter cssnext. Let's say we create a new CSS file. To anyone familiar with other CSS preprocessors like LESS or SASS, the following will seem familiar:
- Copy and paste the contents of Main.css into said file
- Create a class named textual
- Any paragraph (p) or list item (li) directly under an element with textual gets modified text properties
How do we use textual? In About.js we can import our CSS and access the class as styles.textual, not by its literal name 'textual', because CSS Modules hashes the class names, so there is no chance of collisions between classes that share a name across components.
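To make the hashing concrete, here is a toy model of what a CSS Modules compiler hands back on import (real tools derive a content hash; the suffix here is fake and for illustration only):

```javascript
// Toy model of a CSS Modules compile step: every local class name maps
// to a file-scoped, "hashed" name, and importing the file yields that map.
function cssModule(fileName, classNames) {
  const styles = {};
  for (const name of classNames) {
    // Real tools use a content hash; a fixed suffix is enough to show
    // why two components can both declare .textual without colliding.
    styles[name] = `${fileName}__${name}__hashed`;
  }
  return styles;
}

const styles = cssModule('About', ['textual']);
// styles.textual is "About__textual__hashed", never the bare "textual"
```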
The problem arises when we start testing React components. Don't worry: I hear everyone is moving to Enzyme, so I prepped myself, installed it, and cracked my knuckles. Baby steps: let's start with a basic component:
And the corresponding baby test for our baby component
This test will actually fail to compile. Do you know why? If you guessed CSS Modules, you'd be correct. Mocha and Babel don't know how to deal with CSS (and why should they?). A little bit of googling leads us to writing a setup.js require script for mocha:
Tests now passing!
Unfortunately, this euphoric feeling soon passed. Take a look at the next component. Based on the last example, it should be easier to guess what the problem is this time.
Ah, we're importing a markdown module. Same issue as last time, except this time no one has written a 'markdown-require-hook'. Pondering this for a couple of hours made me realize that I may have been going down the wrong path: I didn't want to add a new require dependency for the tests every time one was needed in the source.
karma.conf.js that worked! It’s clear skies from here!
Well, no, not exactly. There may be a series of steps one needs to take when diving deeper into testing components. For instance, when the component tests needed a real DOM, compilation errors started cropping up, and the only fix was advice from a GitHub thread. The fix was to add more to my config files, into which I'm basically copying and pasting blindly. Luckily, it worked, but crawling GitHub and Stack Overflow for solutions to copy and paste seems worrying.
On a different note, a pleasant surprise with our setup is that our CSS Modules are compiled the same way, so we can use styles.foobar to reference the compiled name!
It is important for any application to have benchmarking, because it catches the moments when hot spots in your code base slow the application down enough for the user to notice. When I deployed the first version of the site, I noticed significant lag between when a user selected a date with no incidents and when the suggestion text for a better date popped up.
When faced with slower than expected code, the first step is to profile the production code to find what the bottleneck is. Don’t guess and don’t use development code. Most browsers have profiling tools that will indicate code that needs improvement. Below is a screenshot of Chrome’s profiling tool where optimizations are not needed:
The profiling tool doesn't present helpful function names, but it contains a link to the source-mapped code, which helps track down the issue.
There is no need for optimizations here because the code is idling most of the time, but it wasn't always like this. When I noticed the lag in the site, profiling showed a lot of time was lost in calling momentjs functions. Now it's time for the optimizations.
One thing I learned through benchmarking is that moment.js, for all of its ergonomics, is an extremely slow library, and you are often better off writing your own functions. In fact, my benchmarks showed the new method to be nearly two orders of magnitude faster than the equivalent moment.js function.
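As an illustration of the kind of replacement involved (the function is hypothetical, not the site's actual code): a formatter that does exactly one job needs none of moment's parsing and locale machinery, which is where the speedup comes from.

```javascript
// A hand-rolled stand-in for moment(date).format('YYYY-MM-DD'): no
// parsing, no locales, just string padding on the Date it is given.
function formatISODate(date) {
  const pad = n => String(n).padStart(2, '0');
  return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`;
}

formatISODate(new Date(2015, 0, 9)); // "2015-01-09"
```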