Welcome to NBSoftSolutions, home of the software development company and writings of its main developer: Nick Babcock. If you would like to contact NBSoftSolutions, please see the Contact section of the about page.

The Difficulty of Performance Evaluation of HikariCP in Dropwizard

Benchmarking connection pools can feel like you’re making as much progress as waves against rocks

May 2nd 2017 Update

The author of HikariCP posted a really informative comment here on several shortfalls, noting that the test wasn’t really a fair comparison between HikariCP and Tomcat. This article provides context for the comment. Once read, I also urge readers to follow the conversation between Brett and me over on the HikariCP mailing list. I created a benchmark repo so that others may reproduce the results.


By default, Dropwizard bundles Tomcat JDBC for database connection pooling. Tomcat JDBC boasts safety and performance. However, Tomcat JDBC isn’t the only competition; HikariCP exists (among others), and claims to significantly beat other connection pools (including Tomcat) in terms of safety and performance. Just look at the advertised JMH Benchmarks to see Hikari at 10x the connection cycles of the closest competitor and 2x the number of statement cycles.

What happens when we replace Tomcat JDBC with Hikari? Well, I’m not the first, second, or third to have this thought. The only performance metrics offered are from the first project, and they paint a very different story, with HikariCP up to ten times slower! I set out to verify these numbers.

As an aside, this article is titled “The Difficulty of” because it has no clear conclusion. Embedding a database connection pool inside a web server that sits on the same box as the configured database, on a configured virtual machine, benchmarked from another virtual machine, introduces a nearly uncountable number of options and knobs. But I believe in reproducibility and have included my setup and findings so that others may have a better inkling for performance.

Our experiment will be to create an endpoint as a frontend to querying Stackoverflow questions by user id.


The data used in this experiment is the dataset (200MB gzip link) from “A simple dataset of Stack Overflow questions and tags”.

Below is a snippet of the data using the excellent xsv tool:

$ gzip -d -c questions.csv.gz | xsv slice --end 10 | xsv table

Id  CreationDate          ClosedDate            DeletionDate          Score  OwnerUserId  AnswerCount
1   2008-07-31T21:26:37Z  NA                    2011-03-28T00:53:47Z  1      NA           0
4   2008-07-31T21:42:52Z  NA                    NA                    458    8            13
6   2008-07-31T22:08:08Z  NA                    NA                    207    9            5
8   2008-07-31T23:33:19Z  2013-06-03T04:00:25Z  2015-02-11T08:26:40Z  42     NA           8
9   2008-07-31T23:40:59Z  NA                    NA                    1410   1            58
11  2008-07-31T23:55:37Z  NA                    NA                    1129   1            33
13  2008-08-01T00:42:38Z  NA                    NA                    451    9            25
14  2008-08-01T00:59:11Z  NA                    NA                    290    11           8
16  2008-08-01T04:59:33Z  NA                    NA                    78     2            5
17  2008-08-01T05:09:55Z  NA                    NA                    114    2            11

The command finishes instantly, as xsv asks gzip to stop decompressing after the first ten rows. The efficiency of that statement makes me giddy; like I’m clever.
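That early exit is ordinary Unix pipe mechanics: when the downstream reader closes the pipe, the upstream writer receives SIGPIPE and stops. A toy demonstration of the same behavior (no data files needed):

```shell
# `yes` would write "y" forever, but `head` closes the pipe after one
# line, so the pipeline terminates immediately instead of running forever.
yes | head -n 1
```

Swap `yes` for `gzip -d -c` and `head` for `xsv slice` and you have the command above.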


Our database of choice will be Postgres, but we’ll need to configure the box to aid performance. PostgreSQL 9.0 High Performance comes chock-full of performance tips. The following tips were applied from the book:

  • Set the disk read-ahead to 4096 sectors: blockdev --setra 4096 /dev/sda
  • Prevent the OS from updating file times by mounting the filesystem with noatime
  • vm.swappiness=0
  • vm.overcommit_memory=2
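For reference, a sketch of how the tweaks above might be applied on the server (the /dev/sda device name is from this setup; root access assumed):

```shell
# Disk read-ahead, in 512-byte sectors
blockdev --setra 4096 /dev/sda

# Kernel VM settings; add the key=value pairs to /etc/sysctl.conf to
# persist them across reboots
sysctl -w vm.swappiness=0
sysctl -w vm.overcommit_memory=2

# noatime is set as a mount option, e.g. an /etc/fstab line like:
# /dev/sda1  /var/lib/postgresql  ext4  defaults,noatime  0  2
```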

Caveat: it is most likely that none of these tweaks will have a significant impact, because queries will be against a single user (so Postgres won’t have to go far to fetch the data from its cache).

The following Postgres configurations were taken from PGTune for a 4GB web application.

max_connections = 200
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 5242kB
maintenance_work_mem = 256MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 10


There’ll only be one table for all the data. We’ll first load the data using Postgres’s awesome COPY command (the data should be uncompressed first unless using the PROGRAM directive).

CREATE TABLE questions (
    id serial PRIMARY KEY,
    creationDate TIMESTAMPTZ,
    closedDate TIMESTAMPTZ,
    deletionDate TIMESTAMPTZ,
    score int,
    ownerUserId int,
    answerCount int
);

-- "NA" values in the csv are treated as NULL
COPY questions FROM '/home/nick/questions.csv'
    WITH (FORMAT csv, HEADER true, NULL 'NA');

CREATE INDEX user_idx ON questions(ownerUserId);

Notice the index on the user id was created at the end, for performance reasons: building an index after a bulk load is faster than maintaining it during the load.

The Java

For the application code, the database interactions go through JDBI, which allows you to write the SQL statements in an elegant manner. I realize ORMs exist, but I’m much more comfortable writing the SQL statements myself.

Below we have our query by user id to retrieve their questions asked.

public interface QuestionQuery {
    @SqlQuery("SELECT id, creationDate, closedDate, deletionDate, score, ownerUserId, answerCount\n" +
            "FROM questions WHERE ownerUserId = :user")
    List<Question> findQuestionsFromUser(@Bind("user") int user);
}

We map the SQL results into our POJO Question. One thing to note is that I reference columns by column index. In hindsight, I should have just referenced the column name like I normally do instead of this nonsense. I can tell you that I did not get the code right on my first try (I didn’t realize that column indexing starts at 1 and not 0). I was probably jealous that the Rust ORM, Diesel, will codegen serialization code at compile time using the index.

public class QuestionMapper implements ResultSetMapper<Question> {
    public Question map(int i, ResultSet r, StatementContext ctx) throws SQLException {
        final ResultColumnMapper<LocalDateTime> dtmapper = ctx.columnMapperFor(LocalDateTime.class);
        return Question.create(
                r.getInt(1),
                dtmapper.mapColumn(r, 2, ctx),
                dtmapper.mapColumn(r, 3, ctx),
                dtmapper.mapColumn(r, 4, ctx),
                r.getInt(5),
                r.getInt(6),
                r.getInt(7));
    }
}

Our POJO uses AutoValue for simple Java value objects.

public abstract class Question {
    public abstract int serial();

    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
    public abstract LocalDateTime creation();

    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
    public abstract Optional<LocalDateTime> closed();

    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
    public abstract Optional<LocalDateTime> deletion();

    public abstract int score();

    public abstract int ownerUserId();

    public abstract int answers();

    public static Question create(
            int serial,
            LocalDateTime creation,
            LocalDateTime closed,
            LocalDateTime deletion,
            int score,
            int owner,
            int answers
    ) {
        return new AutoValue_Question(
                serial,
                creation,
                Optional.ofNullable(closed),
                Optional.ofNullable(deletion),
                score,
                owner,
                answers);
    }
}

And our resource is about as simple as one could construct.

public class QuestionResource {
    private final QuestionQuery dao;

    public QuestionResource(QuestionQuery dao) {
        this.dao = dao;
    }

    @GET
    public List<Question> findQuestionsFromUser(@QueryParam("user") int userid) {
        return dao.findQuestionsFromUser(userid);
    }
}

The Test Harness

Once we deploy the Java application to the server, it’s benchmarking time. The test covers three variables:

  • The max number of threads the webserver, Jetty, uses to serve requests
  • The number of threads used by the database connector
  • Which database connector to use: HikariCP / Tomcat.

Since one should not run the benchmarking code on the same box as the application, we move the benchmarking code to another host to conserve limited resources. Once we do, we run into the problem of coordinating the starting and stopping of tests while modifying the aforementioned variables. The solution is a script using ssh keys, but before that’s shown, our base config looks like:

  minThreads: 7
  maxThreads: ${maxThreads}
  adminConnectors: []
    appenders: []

Yes, Dropwizard can use environment variables in configs.

The request log is disabled for performance.

Below is the actual script used to start and stop servers and load test using wrk.


# The HikariCP test case extends the base yaml with the following
# additional configuration
  datasourceClassName: org.postgresql.ds.PGSimpleDataSource
    'databaseName': 'postgres'
  user: nick
  minSize: ${poolSize}
  maxSize: ${poolSize}
  password: nick

# The Tomcat test case extends the base yaml with the following
# additional configuration. Notice how more configuration options
# are specified, one of the selling points of Hikari is that
# there are no "unsafe" options.
  driverClass: org.postgresql.Driver
  url: jdbc:postgresql://localhost/postgres
  user: nick
  minSize: ${poolSize}
  maxSize: ${poolSize}
  initialSize: ${poolSize}
  rollbackOnReturn: true
  checkConnectionOnBorrow: true
  autoCommitByDefault: false
  validationInterval: '1 second'
  validatorClassName: 'com.example.TomValidator'
  password: nick

# The actual function to do the load test. Arguments: $1 = yaml overlay,
# $2 = config name, $3 = number of wrk connections
load_test () {
    YAML=$1
    CONNECTIONS=$3

    # Nested for loop creates 42 configurations for each
    # test case, which causes this to be long script!
    for i in 1 2 4 8 16 32; do
    for j in 1 2 4 8 16 32 64; do
        export POOL_SIZE=$i
        export MAX_THREADS=$((6 + j))
        export CONFIG=$2
        echo "Pool: ${POOL_SIZE}. Server threads: ${MAX_THREADS}"
        echo "${YAML}" | ssh [email protected] "cat - config_base.yaml > config_new.yaml
            # Kill the previous server instance if one exists
            pkill java

            # Wait for it to cleanly shut down as we reuse ports, etc
            sleep 3s

            # Set the environment variables used in the configuration files
            # and use nohup so that when this ssh command (and connection) exits
            # the server keeps running
            poolSize=${POOL_SIZE} maxThreads=${MAX_THREADS} nohup \
                java -jar echo-db.jar server config_new.yaml >/dev/null 2>&1 &

            # Wait for the server to properly initialize
            sleep 5s"

        # Load test for 60 seconds with four threads (as this machine has four
        # CPUs to dedicate to load testing) Also use a custom lua script that
        # reports various statistics into a csv format for further analysis as
        # there'll be 80+ rows, with each row having several statistics.
        # We're using "tee" here so that we can see all the stdout but only
        # the last line, which is what is important to the csv is appended
        # to the csv
        wrk -c "$CONNECTIONS" -d 60s -t 4 -s report.lua ${URL} | \
            tee >(tail -n 1 >> wrk-$CONNECTIONS.csv)
    done
    done
}

# Call our function!
load_test "$TOMCAT_YAML" "tomcat" 300
load_test "$HIKARI_YAML" "hikari" 300

The custom lua script is the following (and this was my first time writing lua!). It’s nothing crazy.

done = function(summary, latency, requests)

The Results

HikariCP is both the best and the worst when it comes to mean response, 99th percentile response time, and requests served. It all comes down to how the server and the pool are configured. A properly tuned Hikari pool will beat Tomcat; however, I know very few people who would take the time to benchmark until the right configuration is found. It’s not easy work. Each test run takes over an hour, and time is money. In a big corporation these benchmarking tests could be run in parallel, but when you’re a one-man show, you wait that hour!

But for what it’s worth, I’ll let the data speak for itself. Since we saved our wrk data into a csv, we can analyze it using R. For any graph or table, the source code to generate it is located underneath it.


columns <- c("config", "pool size", "max threads", "requests", "mean", "stdev", "p50", "p90", "p99")
df <- read_csv("/home/nick/Downloads/wrk-100.csv", col_names = columns)

# Subtract 6 from max threads because Jetty uses six threads to manage the request
# threads and other activities
df <- mutate(df, `max threads` = `max threads` - 6)

[figure: mean response at different pool sizes]

ggplot(df, aes(factor(`pool size`), mean, ymin=0)) +
  geom_jitter(aes(colour = config), width=0.15) +
  labs(title = "Mean response at different pool sizes",
       x = "DB Pool size",
       y = "Response time (us)")

For pool sizes greater than 1, most configurations have a mean time between 10ms and 15ms, with Hikari containing about 2/3 of the configurations below 10ms. We can also see that at pool sizes of 2 to 8, Hikari had fewer configurations with an average response time above 20ms than it did at other pool sizes.

The worst mean response time belongs to Hikari in each category.

How does the trend hold when looking at the 99th percentile in response times?

[figure: 99th percentile at different pool sizes]

ggplot(df, aes(factor(`pool size`), p99, ymin=0)) +
  geom_jitter(aes(colour = config), width=0.15) +
  labs(title = "99th Percentile at different pool sizes",
       x = "DB Pool size",
       y = "Response time (us)")

In an interesting turn of events, looking at 99th percentiles below 50ms, Tomcat edges out with a greater number of fast configurations, but Hikari can still claim the best 99th percentile across the configurations where pool size is greater than 1, which we can consider our baseline.

Now let’s move onto the number of requests that each configuration served at a given database pool size.

[figure: requests served at different pool sizes]

ggplot(df, aes(factor(`pool size`), requests, ymin=0)) +
  geom_jitter(aes(colour = config), width=0.15) +
  labs(title = "Requests served at different pool sizes",
       x = "DB Pool size",
       y = "Requests")

Hikari shined here by having a configuration that did significantly better than the others at pool size 4. The next top 5 in that category were all Tomcat, so any celebration should be muted. Across all categories Hikari had both the most and the fewest requests served, depending on the configuration.

Let’s narrow our focus to just a pool size of 4, as it seems to bring out the best performance, and see how configuring Jetty threads affects response times.

[figure: mean response for a DB pool size of 4, by Jetty max threads]

dodge <- position_dodge(width=.9)
df %>% filter(`pool size` == 4) %>%
  ggplot(aes(factor(`max threads`), mean, fill=config)) +
  geom_bar(stat='identity', position=dodge) +
  geom_errorbar(aes(ymin = p50, ymax = p90), position=dodge) +
  labs(title = "Mean response for DB pool size of 4 with [median, 90%] error bars",
       x = "Jetty Max Threads",
       y = "Response time (us)")

The graph may need a bit of an explanation:

  • The bars represent the mean response for that number of Jetty threads
  • The lower whisker is the median for that configuration. Notice how it is always lower than the mean, thus the response time distribution is skewed right
  • The upper whisker represents the 90th percentile of response time and gives a rough idea of how big the skew is.

The graph depicts that the best overall configuration is a Hikari db pool of 4 connections with Jetty having 4 threads to accept incoming db requests (which, remember, corresponds to maxThreads: 10 in the dropwizard config).

As Jetty allocates more threads to handle db requests, Hikari’s performance suffers under the contention worse than Tomcat’s, which explains why one needs to explore secondary variables: had I used the dropwizard default of maxThreads: 100, Hikari would have “lost” across the board.


Why 4? Why so low? On the Hikari wiki, there is a great article about pool sizing, from which I learned a lot; I highly suggest reading it. Anyway, 4 is also the number of “virtual processors” allocated to the machine under test, which is shared between the application and the database.
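For what it’s worth, the rule of thumb from that pool-sizing article is connections = (2 × core count) + effective spindle count. On this 4-vCPU box with (I assume) a single disk, that works out to 9, though sharing the box with the database evidently rewards going even lower:

```shell
# Pool-size rule of thumb from the HikariCP pool-sizing article.
# cores and spindles below are this article's setup, not detected values.
cores=4      # virtual processors on the box
spindles=1   # effective spindle count (single disk assumed)
echo $(( 2 * cores + spindles ))   # prints 9
```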

By configuring Jetty to use 4 threads to accept 4 requests at a time, there is no contention for the connection pool. All backpressure is handled by Jetty. Why Jetty appears to handle backpressure better than Hikari is a bit of a puzzle to me, considering Hikari claims to perform better under contention than other database pools.


Here are some tables for various best and worst configurations.

Fastest 5 average time:

pool    pool size  jetty threads  mean response (us)
hikari  4          4              8035
hikari  16         4              8174
hikari  8          8              8319
hikari  16         8              8436
hikari  32         4              8534
# Fastest 5 by mean response
df %>% top_n(-5, mean) %>%
  arrange(mean) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, mean, sep="|"), "|"))

Slowest 5 average time:

pool    pool size  jetty threads  mean response (us)
hikari  1          4              27325
hikari  32         64             26365
hikari  1          8              25113
hikari  1          2              24799
tomcat  1          8              24433
# Slowest 5 by mean response
df %>% top_n(5, mean) %>%
  arrange(-mean) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, mean, sep="|"), "|"))

Fastest 5 by 99% percentile in response

pool    pool size  jetty threads  99th percentile (us)
hikari  16         8              35517
hikari  8          8              36363
hikari  16         16             37620
hikari  32         16             37666
tomcat  4          32             38342
# Fastest 5 by 99% percentile in response
df %>% top_n(-5, p99) %>%
  arrange(p99) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, p99, sep="|"), "|"))

Slowest 5 by 99th percentile in response

pool    pool size  jetty threads  99th percentile (us)
hikari  32         64             157454
hikari  1          64             147926
hikari  16         64             145653
hikari  1          1              117293
hikari  8          64             115814
# Slowest 5 by 99% percentile in response
df %>% top_n(5, p99) %>%
  arrange(-p99) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, p99, sep="|"), "|"))

Top 5 for requests served:

pool    pool size  jetty threads  requests
hikari  4          4              869924
hikari  16         4              835446
hikari  8          8              802282
hikari  32         4              801283
tomcat  16         16             794392
# Top 5 for requests served
df %>% top_n(5, requests) %>%
  arrange(-requests) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, requests, sep="|"), "|"))

Bottom 5 for requests served:

pool    pool size  jetty threads  requests
hikari  1          4              226305
hikari  1          8              255766
tomcat  1          8              259909
hikari  1          2              260054
hikari  32         1              271876
# Bottom 5 for requests served
df %>% top_n(-5, requests) %>%
  arrange(requests) %>%
  transmute(str = str_c("|", str_c(config, `pool size`, `max threads`, requests, sep="|"), "|"))


Neither HikariCP nor Tomcat was the clear winner. While HikariCP had the best performance, it also had the worst performance, depending on configuration, whereas Tomcat executed at a consistent level.

HikariCP claims to be safer, but I don’t consider Tomcat unsafe.

Tomcat is used by default in Dropwizard, but HikariCP has third party modules to hook into Dropwizard.

Thus, I remain inconclusive. The decision is yours, but if you want to be sure, benchmark your own application!


The application code:

public class EchoApplication extends Application<EchoConfiguration> {
    public static void main(final String[] args) throws Exception {
        new EchoApplication().run(args);
    }

    @Override
    public String getName() {
        return "Echo";
    }

    @Override
    public void initialize(final Bootstrap<EchoConfiguration> bootstrap) {
        bootstrap.setConfigurationSourceProvider(
                new SubstitutingSourceProvider(bootstrap.getConfigurationSourceProvider(),
                        new EnvironmentVariableSubstitutor(true)));
    }

    @Override
    public void run(final EchoConfiguration config,
                    final Environment environment) {
        final DBIFactory factory = new DBIFactory();

        // Since Java generics are not covariant we must use map:
        // http://stackoverflow.com/q/2660827/433785
        final Optional<PooledDataSourceFactory> tomcatFactory = config.getTomcatFactory().map(x -> x);
        final Optional<PooledDataSourceFactory> hikariFactory = config.getHikariFactory().map(x -> x);
        final PooledDataSourceFactory datasource = tomcatFactory.orElse(hikariFactory.orElse(null));
        final DBI jdbi = factory.build(environment, datasource, "postgresql");
        jdbi.registerMapper(new QuestionMapper());
        final QuestionQuery dao = jdbi.onDemand(QuestionQuery.class);
        environment.jersey().register(new QuestionResource(dao));
    }
}

I created a custom PooledDataSourceFactory to set up the Hikari connections instead of reusing those in other dropwizard-hikari projects as they either didn’t expose the properties I wanted or didn’t derive from PooledDataSourceFactory. I won’t copy the whole class that I created as it is pretty “boiler-platey”, so here’s the main chunk:

public ManagedDataSource build(final MetricRegistry metricRegistry, final String name) {
    final Properties properties = new Properties();
    for (final Map.Entry<String, String> property : this.properties.entrySet()) {
        properties.setProperty(property.getKey(), property.getValue());
    }

    final HikariConfig config = new HikariConfig();
    config.setDataSourceProperties(properties);
    config.setPassword(this.user != null && this.password == null ? "" : this.password);
    return new HikariManagedPooledDataSource(config, metricRegistry);
}

If you’re wondering what the TomValidator is from the configuration, it’s a custom class to validate Tomcat connections.

/* A custom Tomcat connection validator, which returns true if the connection
   is valid. Taken from the HikariCP-benchmark project:
   https://github.com/brettwooldridge/HikariCP-benchmark/blob/c6bb2da16b70933fb83bdcdb662ce6bf1f7ae991/src/main/java/com/zaxxer/hikari/benchmark/BenchBase.java */
public class TomValidator implements Validator {
    @Override
    public boolean validate(Connection connection, int validateAction) {
        try {
            return (validateAction != PooledConnection.VALIDATE_BORROW || connection.isValid(0));
        } catch (SQLException e) {
            return false;
        }
    }
}
First Impressions of the Google Pixel

Up-close photo of my ultrabook, taken with the Pixel’s camera

I wasn’t lying when I said I was getting rid of my old Windows phone and grabbing myself a Google Pixel. I’ve had a wide range of emotions as I’ve been exploring the phone and ecosystem. It’s been less than a week, and things have been great, though there are still some rocky areas. This post mostly serves my needs for documenting my journey to a new platform.


I’m not sure how to correctly share my setup, so I’ll have to make do with just my pictures and words!

Let’s start with my home screen.


  • Nova Launcher Prime - $4.99
  • Zooper Widget Pro - $2.99
  • Whicons icon set

Clicking each of those icons will lead you to the default app (messaging, email, internet, and camera). Swiping up on each icon will bring up a selection of other apps (screenshot below). This allows for a minimal home screen setup while keeping frequent apps only a swipe away.


Swiping up anywhere on the screen (aside from the folders) brings up a list app drawer, reminiscent of Windows Phone.



  • Today’s date (without the time!) is from the Ocea zooper set. Not 100% sure I’ll keep it; it almost looks too trendy, but there seemed to be a severe lack of widgets that show just the date. Maybe they expect the user to delete the time elements in the zooper widget?
  • Heavily modified side info widget from the Parrot zooper widget collection to show just my next calendar item in two lines of text: the event description and the event time. In hindsight, this is probably one of the easiest zooper widgets to create, so I could have started from scratch! Clicking on the text brings up the calendar app. I wish it would open the event!
  • The forecast is from the material style widget pack. I also modified this widget so the background became transparent. To do this, I had to take a screenshot, watch it upload, download it on my computer, and open it in Paint to use the color picker. I may have to replace this widget in the future, as there is a bug when the forecast spans two months: the next month gets the previous month’s label. Clicking on the forecast will bring up a weather app.
  • I wish there were a widget that was just an up or down arrow with a percentage representing the Dow Jones market activity for the day. Sometimes I forget to check on the market (which is not a bad thing), so as long as I get a sense of what the market is doing (even at a superficial level), I’ll be satiated. If you haven’t gotten the feeling already, I’m into simplicity.

Are these serious widget flaws? No, but I’m still not 100% satisfied, so I may look for future improvements. There are other widget tools out there, like KWGT. A reddit comment sums up the choices:

If you plan on primarily using other people’s widgets, then Zooper has a better library. However, if you don’t mind the smaller selection and prefer creating your own widgets, KWGT is clearly the top.

KWGT has better tools, more functionality, and it is supported by an active dev. I would say it’s a little more complex than Zooper, but only because it provides so much functionality compared to Zooper.

Zooper is older than KWGT, and it has more widgets, but the dev of Zooper has pretty much abandoned it. It’s also an inferior product compared to KWGT, but it IS easier to use. I’d recommend buying KWGT and making the time investment into learning that. (By the way, KWGT is significantly easier on the battery than Zooper, which was very important to me).

Maybe in the future I’ll look towards KWGT, but right now I’m not noticing the battery issue and I’m not interested in continuously shelling out money for widget makers.


I’m not a big apps user (maybe because I was a Windows Phone user for four or so years). Now that I have access to a much larger ecosystem, I didn’t expect it to change, and for the most part it hasn’t, but I have started a collection.

  • I can use the Amtrak app again (it was removed from the windows store)
  • I can use the Chase app again (it was also removed from the windows store)
  • Take pictures of checks for my bank
  • Brokerage app
  • Memrise (highly recommend checking it out for learning a language)
  • Ventra app for Chicago transit

Overall the app quality is much higher than windows store apps, which should come as a surprise to no one. What I do find surprising is that I felt like most of my apps on windows phone didn’t have ads, but I can’t seem to find equivalent free apps with no ads in the android ecosystem. I’m mostly thinking of podcast and weather apps. I’m somehow put off by the ads in these apps. And it’s not that I’m against paying for apps, as can be seen by my purchases in the customization department. The thought of paying for every single app to get an ad-free experience is exhausting. I wish I could pay a small fee to google and get an ad-free experience across all apps. Something like the Brave browser’s Ad Replacement program.


  • Google calendar can’t call all forms of Webex meetings correctly. It sometimes enters the phone number as the access code. This wouldn’t be as annoying if I could paste the access code into the dialer, but once a call has started, one can’t paste into the dialer. Simply infuriating. iOS and Windows Phone solved this problem eons ago. I know there are third party phone apps, but I don’t want a third party app. I want Google to grow up, start supporting first party apps, and reach feature parity with Windows Phone. That sentence was a bit harsh, but users of Samsung phones and their calendar app have no problems with dialing in the access code. If Samsung can do it, Google should too. I’ve been able to work around it somewhat using Multi-Window mode, where I have the phone app and the webex event opened side by side so that I can type the access code without memorizing it. I’m afflicted by first world problems.
  • Speaking of calendars, I have a calendar through my personal outlook account, where I keep track of plane tickets, train tickets, and other odd events. It turns out Google calendar doesn’t support non-Exchange calendars. An even bigger bummer is that I can’t change the default calendar app to point to the outlook app so that my zooper widgets can pick up on my events. Why, Google!? This is another feature that both iOS and Windows Phone implement. Creating another calendar backend should be an intern project. Here, I’ll get the intern started: Outlook Calendar REST API reference
  • I want to be able to swipe left to bring up the App Drawer
  • Customize the lock screen just a tad bit. Maybe bring the time to lower left hand corner like Windows Phone.

Yes, scathing words were used, but if those couple of issues with the google calendar app can be fixed, then I’ll write off the others as quibbling, and I’ll have no regrets.

The Camera

The hype is real here. I’m purposely taking pictures in low light!


Features I don’t use

Looking at the Verge’s article on features, here’s a list of ones I don’t use.

  • Google Assistant: I’m not on the voice train yet. Though I do use a very similarly named feature called the Google Wifi Assistant, which allows one to connect to open wifi networks securely using a Google VPN.
  • Google Daydream: Not on the VR train either
  • Pixel Launcher: Nope. Using Nova Launcher Prime
  • 24/7 voice support: Hopefully I’ll never have to use this!

And these features are just the ones that are unique to Google phones. Imagine the number of Android features I’m not using!

The phone has been great so far, not spotless, but I’m excited for what the future brings.

The Last Hurrah, Bye Windows Phone

Windows Phone vs Android vs iOS

People are always surprised when I say this, but I have a Windows phone. I’ve had one for the past four or five years, and there have only been a couple regrets: the app store and the lack of a modern browser. I found these to be only minor annoyances, and if Verizon hadn’t dropped Windows phone, I’d probably be getting another one. I know it sounds crazy, but I find the Windows phone interface the smoothest and best looking, and at the end of the day I only use a couple of apps. The built-in apps are of high quality. I wish it had succeeded, but I’m not one to cling to the past. Fate has been accepted and I must look elsewhere for my next phone, though I won’t lie; I’ve fancied not upgrading. Unfortunately, at one point or another I’ll have to move off, as it’s not a sustainable ecosystem.

HTC One M8 (courtesy of HTC’s website)

I’m writing this post as documentation so that I may look back and understand my decisions. Writing down my thought process will ensure that I don’t make a hasty decision, especially when someone tries to persuade me otherwise; I can have an informed conversation. My decisions are tailored to my needs; your mileage will vary.

I’ve considered the following phones, which are offered by Verizon:

  • HTC 10
  • Google Pixel and the XL version
  • iPhone 6/7

Here’s a brief table of some specs with my current HTC one M8 included (there are many tangible and intangible qualities excluded from this brief table):

phone         battery  display  storage          camera (rear)
HTC One M8    2600mAh  5in      32GB             4MP
Google Pixel  2770mAh  5in      32, 128GB        12.3MP
HTC 10        3000mAh  5.2in    32GB             12MP
iPhone 6      1810mAh  4.7in    16, 64, 128GB    8MP
iPhone 7      1960mAh  4.7in    32, 128, 256GB   12MP

Samsung rubs me the wrong way, so it was excluded from the list. Their phones are not aesthetically pleasing to me. I also decided against looking at more exotic phones (phones not offered by Verizon) because if I ever need support for a phone, I want it to be officially supported.

Samsung S7 - disappointing in the aesthetic department (courtesy of Samsung’s website)

The biggest constraint I’ve felt when looking at these phones is how to incorporate my multiple Microsoft accounts into the chosen phone. For better or worse, I have Hotmail, Outlook, OneDrive, and OneNote. I can copy and paste between all these apps, such as copying a meeting access code into the phone app so I don’t have to dial the numbers individually. I don’t want to spend a significant amount of time downloading and configuring a new phone to reach the same level of proficiency that I have currently. When switching phones, I don’t want my hand to be forced into a service I have no need for, like Apple Music or Google Play Music. Not that these services are detrimental to my well-being, but I’m more interested in baby steps, with acquiring a new phone as the first step. Currently, I have no desire to transition my information to Google’s or Apple’s services. The one advantage that Google does have is that I have a Gmail account, though it lies dormant until needed for a site’s registration. I’m willing to bet I’d need it for downloading from the Google Play store.

Speaking of accounts, I have zero Apple accounts and share the same sense of dread towards Apple’s services. iTunes leaves a bad taste in my mouth. Some have to ask if Safari is the new IE. The removal of the headphone jack is unfortunate because I don’t want a new set of headphones, and I often find myself on the train listening to music while charging the phone. Forcing me to pay for a developer license ($99 a year) to program for my own phone drives me insane. One of the original reasons I chose a Windows phone over the others is that I didn’t want to pay, and I preferred C# over Java (Android). Even though I ran into early roadblocks with the lack of controls for Windows phone, which persuaded me to drop development, the platform still allowed me that freedom if I ever chose to pick it up again. Maybe in another life I’d be an Apple fanboy, but investing in that ecosystem now seems too time consuming and expensive.

iPhone 7 (courtesy of Apple’s website)

I am impressed that all platforms have a notion of live home screen apps (called live tiles on Windows phone, widgets on Android, and the widget screen in iOS 10). I use the live tiles quite a bit because at a glance I can see the weather, calendar, stocks, news, text messages, etc. without opening the respective app. Below is my current home screen.

My Windows phone home screen; if I had any messages, emails, or calls, they’d show up on their respective application

iOS loses to Windows phone here. I had to spend 15 minutes trawling Google to find someone who wasn’t complaining about this feature and a screenshot showing more than one widget. The result is not stunning at all. Uniformity does have benefits, but the color scheme doesn’t jive. Too bad there is not much one can do to customize an iPhone.

Apple lock screen (courtesy of the Mac Observer)

Android takes the cake, with nearly everything being customizable. Just take a look at the videos that cover some of the best Android launchers (like this one); some of these are downright beautiful and turn the phone into a piece of art. Frankly, I’m jealous. The one downside is that there is a cost associated with these modifications. Nova Launcher Prime is $5 and Zooper Pro is $3. Both apps offer free versions, so it doesn’t sound like a bad deal – getting a beautiful screen for < $10 should warrant minimal complaints.

A Windows phone-esque home screen for Android (courtesy of Zooper)

So that eliminates iPhones, as they don’t seem to shine anywhere on the spectrum. It’s a shame as a large part of my family has iPhones and I hear that Facetime is nice, but Skype works fine, and I’ve only needed to video chat on my phone when I’m not by my computer, which is rarely!

On to the Androids. Early on I was leaning towards the HTC 10, as it would be a natural upgrade from my HTC One M8. However, my Windows phone still hasn’t been updated to Windows 10, which was released over a year ago. I’m not sure who is to blame, HTC or Verizon, but the result is the same: I don’t want to be without the latest updates, which often contain needed security fixes. There have been reports of HTC 10 users on Verizon networks not being updated to the latest Android. Not acceptable. Considering that the Pixel and the HTC 10 are close in specs, this was the dealbreaker for the HTC 10.

HTC 10 (courtesy of HTC’s website)

I’m deciding to go with the regular Google Pixel. It has the same specs as its larger brother, but a 1080p screen instead of QHD, which is fine by me. The Pixel is the same size as my current HTC One M8, so it should be a size I’m used to. Compare that to the XL, which is an additional half an inch: it doesn’t sound like a lot, but I’m sure my little fingers couldn’t stretch the distance. Not to mention the regular Pixel is $120 cheaper.

The next question: order it directly from Google or from Verizon? The only solid evidence I’ve found against Verizon is that the bootloader is locked and can’t be circumvented without voiding the warranty. I don’t plan on running an alternative OS, so that is not a strong disadvantage. People have speculated that updates from Verizon will be slow or the phone will be bloated. On the contrary, updates will be pushed at the same time as Google’s Pixels, and there will be no bloatware if the phone is initialized without a Verizon SIM card. If a Pixel comes directly from Google, on the other hand, you may find that insurance on the device is limited, whereas Verizon may offer fuller coverage.

Google Pixel (courtesy of Google’s website)

One thing that I have found bizarre about Android devices is that there is no default messaging app. Both Windows phones and iPhones have default messaging apps, so why not Android? Here’s the top 10, but wait, here’s the top 16, but wait, what is installed on your phone initially is the carrier’s app. Thankfully, it looks like Google has published one, so I think I’ll use that (or Signal), but the sheer number of choices for something so fundamental has made me question what else I’ll need to look for (I just checked, and I should be good to go with an alarm clock).

Circling back to what I thought would be a big constraint: there appear to be high quality and well maintained Android apps from Microsoft. So I’ll be able to set up Outlook and OneDrive with no problem. Doesn’t seem like such a hurdle now!

Yup, so it’s the Google Pixel for me.