Thursday, July 7, 2016

Sprint 3 - Accountabilibuddy

Last sprint, I admitted to having gone off track from doing requirements first, writing one feature at a time, and diligently merging features. So in order to keep myself accountable, I'm going to post my requirements for the sprint here first and you can be my accountabilibuddies. Sadly, I know better than to go rogue, but sometimes it's just so fun!

Here are my requirements for Sprint 3. Following the completion of this Sprint, I should be fairly close to having the data needed for the rules engine. From there, it is a matter of interpreting the event I am already receiving, figuring out who should receive a message, and then integrating with Twilio. After that, some refactoring, a deployment guide, and some automated testing will finish off the project...

Sprint-3-01 - Connect to a Mongo Database

As a… developer,
I want to… connect to a database,
So that… I can permanently store data that will be used by my application.
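
To make this concrete, here is a minimal connection sketch using the native MongoDB driver; the connection string and database name are placeholders, not the final implementation:

```javascript
// db.js - a minimal connection sketch (native MongoDB driver assumed;
// the connection string and database name are placeholders)
const MongoClient = require('mongodb').MongoClient;

const MONGO_URI = process.env.MONGO_URI || 'mongodb://localhost:27017/dexcom';

function connect(callback) {
  MongoClient.connect(MONGO_URI, function (err, db) {
    if (err) {
      console.error('MongoDB connection error:', err);
      return process.exit(1);
    }
    console.log('Connected to MongoDB');
    callback(db);
  });
}

module.exports = connect;
```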

Sprint-3-02 - Establish login sessions

As a… security minded person,
I want to… have the ability to login,
So that… some data is not readily available to non-authenticated users.

Some API endpoints will not require authentication; others will. To gain access to secure endpoints, a user will click a login button and be brought to the login screen. Once they enter the Dexcom password, a token will be issued and kept in the sessions collection in the database. Any call to a secure API must include an “Authorization” header containing the session token provided by the login. If the header is missing, the token cannot be found in the sessions collection, or the session has expired, the system will return a 403 error. By default, a session is valid for 24 hours from the last interaction with the system. Every time a secure endpoint is accessed, the session's last-used value is set to the current time; this value is used for timeout calculations.
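
A rough sketch of what this check might look like as Express middleware; the sessions collection and lastUsed field names are working assumptions, not the final implementation:

```javascript
// auth.js - a sketch of the session check described above
// (collection and field names like `sessions` and `lastUsed` are assumptions)
const SESSION_TIMEOUT_MS = 24 * 60 * 60 * 1000; // 24 hours

function requireSession(db) {
  return function (req, res, next) {
    const token = req.headers['authorization'];
    if (!token) {
      return res.status(403).send('Forbidden');
    }
    db.collection('sessions').findOne({ token: token }, function (err, session) {
      const now = Date.now();
      if (err || !session || now - session.lastUsed > SESSION_TIMEOUT_MS) {
        return res.status(403).send('Forbidden');
      }
      // Sliding expiration: every hit on a secure endpoint refreshes lastUsed
      db.collection('sessions').updateOne(
        { token: token },
        { $set: { lastUsed: now } },
        function () {
          next();
        }
      );
    });
  };
}

module.exports = requireSession;
```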

Sprint-3-03 - Secure Updates

As a… security minded person,
I want to… modify the existing update mechanism,
So that… I can continue to use push events, but differentiate between events that are public and updates that are intended for admin users only.

If the client does not provide an authorization header when connecting an EventSource to the /api/update endpoint, the response will be cached on the server as before. For push events that are considered public, the event will be sent to all cached responses. However, if the EventSource request includes an “Authorization” header, the token will be compared to the sessions collection to ensure its validity. If it is not valid, a 403 should be sent. If it is valid, the response will be considered a secure event stream. When the update method is called with the adminOnly flag set to true, only the cached responses that are considered secure will receive the update.
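
A sketch of how the public versus secure streams could be tracked on the server; the clients array, isValidSession() helper, and update() signature are assumptions for illustration:

```javascript
// updates.js - sketch of the public vs. secure event streams described above
// (assumes an Express `app`, plus a hypothetical isValidSession() helper)
const clients = []; // each entry: { res: response, secure: true/false }

app.get('/api/update', function (req, res) {
  const token = req.headers['authorization'];
  if (token && !isValidSession(token)) {
    return res.status(403).end(); // bad token on a would-be secure stream
  }
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  const client = { res: res, secure: Boolean(token) };
  clients.push(client);
  req.on('close', function () {
    clients.splice(clients.indexOf(client), 1);
  });
});

// Public events go to every cached response; adminOnly events only to secure ones
function update(event, data, adminOnly) {
  clients.forEach(function (client) {
    if (adminOnly && !client.secure) {
      return;
    }
    client.res.write('event: ' + event + '\n');
    client.res.write('data: ' + JSON.stringify(data) + '\n\n');
  });
}
```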

Sprint-3-04 - Add/Update/Delete Message Recipients

As an… administrator to the service,
I want to… add, update, or delete someone who will receive alerts,
So that… I can maintain who is receiving alerts.

Once logged in, the UI should have a dedicated page to add, update, or delete intended recipients. This page should not be available if the user has not authenticated. Basic information that should be viewable/editable includes (see the sketch after this list):

  • Name (minimum 2 characters)
  • Phone number (must be exactly ten digits - numbers only)
  • Expiration date, with a checkbox for “never expire”; if unchecked, display a calendar to pick a date
  • Include weekends and holidays
  • Button to view/update current rules
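
As a rough sketch of the validation these fields imply (the field names here are assumptions, not the final schema):

```javascript
// recipient.js - a sketch of the validation rules for a recipient record
// (field names are assumptions based on the list above)
function validateRecipient(recipient) {
  const errors = [];
  if (!recipient.name || recipient.name.trim().length < 2) {
    errors.push('Name must be at least 2 characters');
  }
  if (!/^\d{10}$/.test(recipient.phone || '')) {
    errors.push('Phone number must be exactly ten digits');
  }
  if (!recipient.neverExpires && isNaN(Date.parse(recipient.expirationDate))) {
    errors.push('Expiration date is required unless "never expire" is checked');
  }
  return errors;
}

module.exports = validateRecipient;
```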

Sprint-3-05 - View/Edit/Update Alerts

As an… administrator,
I want to… view, edit, update alerts for users,
So that… I can configure rules based messages.

From the user screen, I want to be able to click a button and add, delete, and/or modify existing rules. Rules should consist of (see the validation sketch after this list):
  • Start Hour
  • Start Minute
  • AM/PM indicator
  • End Hour
  • End Minute
  • AM/PM indicator
  • Message type: text, call, or both
  • Checkboxes for event types:
    • High and box for high value (verify only number > 150), box for repeat every x minutes (must be greater than 5)
    • Low and box for low value (verify only number and > 40 and < 100), box for repeat every x minutes (must be greater than 5)
    • Double Up, box for repeat every x minutes (must be greater than 5)
    • Double Down, box for repeat every x minutes (must be greater than 5)
    • No Data and box for minutes without data, box for repeat every x minutes (must be greater than 60)
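
A sketch of how those thresholds might be validated; the field names and the shape of a rule are assumptions, and Double Up/Double Down would follow the same pattern as High and Low:

```javascript
// rule.js - a sketch of the rule validation described above
// (field names and the exact shape of a rule object are assumptions)
function validateRule(rule) {
  const errors = [];
  if (rule.high && rule.high.enabled) {
    if (!(rule.high.value > 150)) {
      errors.push('High threshold must be a number greater than 150');
    }
    if (!(rule.high.repeatMinutes > 5)) {
      errors.push('High repeat interval must be greater than 5 minutes');
    }
  }
  if (rule.low && rule.low.enabled) {
    if (!(rule.low.value > 40 && rule.low.value < 100)) {
      errors.push('Low threshold must be a number between 40 and 100');
    }
    if (!(rule.low.repeatMinutes > 5)) {
      errors.push('Low repeat interval must be greater than 5 minutes');
    }
  }
  // Double Up / Double Down would be checked the same way as High and Low
  if (rule.noData && rule.noData.enabled && !(rule.noData.repeatMinutes > 60)) {
    errors.push('No Data repeat interval must be greater than 60 minutes');
  }
  return errors;
}

module.exports = validateRule;
```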

Sprint-3-06 - All edits must be “live”

As an… administrator,
I want to… see any updates made by another administrator,
So that… I am always looking at the most current data.

If I am logged in and looking at the users screen and another administrator edits or deletes data, the change should be reflected in my current view without me refreshing the browser.
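
One way this could work is to broadcast on every admin write, reusing the hypothetical update() function from the Sprint-3-03 sketch; the route, collection, and helper names here are placeholders:

```javascript
// Sketch: after any write from an admin endpoint, push the change to every
// secure stream so other logged-in admins see it without a refresh
// (assumes `app`, `db`, requireSession(), and update() from earlier sketches)
const ObjectID = require('mongodb').ObjectID;

app.put('/api/recipients/:id', requireSession(db), function (req, res) {
  db.collection('recipients').findOneAndUpdate(
    { _id: new ObjectID(req.params.id) },
    { $set: req.body },
    { returnOriginal: false },
    function (err, result) {
      if (err || !result.value) {
        return res.status(500).end();
      }
      // adminOnly = true: only secure event streams receive this push
      update('recipientChanged', result.value, true);
      res.json(result.value);
    }
  );
});
```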

Tuesday, July 5, 2016

Sprint 2 - Environments

Excuses...

I had fully intended to write a blog and complete a sprint every week, but... It had been a cold, rainy, and thoroughly unenjoyable summer thus far here in Central Texas. And then the weather got nice. I decided to spend some time jet skiing on the lake with the family instead of working on my messaging service. So a few weeks went by and I hadn't written anything. I had intended to write a very simple JavaScript application that would run 100% in the browser to monitor the current Dexcom reading, but found out that I would be reliant on the service I wrote in Sprint 1 - the browser version was a non-starter since calling Dexcom requires certain headers that browsers will not allow. I abandoned all Agile discipline and just started writing the client piece without creating requirements first. I really should fire myself, but... I do have a demonstration of working software, so I will not be too mad at myself.

If you point your browser to https://www.thezlotnicks.com you can see, in real time, Zoe's current glucose level, her trend, and how long until the next reading. It looks simple, but the following items were covered in this sprint:


  • Established an environment for a Single Page App - a ReactJS-based client
    • All pages, stylesheets, and JavaScript libraries are loaded on the initial call
    • Subsequent calls require only a very lightweight exchange of JSON objects between the client and server
  • Built a persistent event listener into a Flux store
    • Any change in state can be pushed from the server to connected clients
    • On a disconnect, the client knows it is disconnected and the last known event it had received
    • The client will attempt to reconnect to the server within three seconds of a known disconnect
  • Created a poor man's continuous integration environment using GitHub's webhooks
    • Once the code is working in the development environment, a simple git push will relay a message to the production environment
    • The production environment will run a shell script to get the latest changes, run npm install, and restart itself
  • Configured nginx to route the client-side JavaScript and assets, and to route calls to the API to the Node-based Express server
    • It would be possible to take the client-side assets and place them on a Content Delivery Network, but this is a rather simple example, so having nginx serve the static content seems reasonable
    • Additionally, nginx is rerouting all requests from http to https and I have installed a valid SSL certificate on the server for secure communication
  • The client now allows any browser to establish a connection and receive a simple display
    • This will act as the foundation piece for subsequent sprints covering stories to add message recipients, configure message rules, maintain a holiday/vacation calendar, and provide alert acknowledgements
So if any or all of the above seems like technical jargon, I'm going to break it down...

Single Page Apps

Back when the whole Internet was brand new, there were a bunch of static websites made up of HTML text. A static website essentially means the text of the HTML document does not change. This was OK for simple marketing pages, known as brochureware, but to do anything more complicated, there was a need for dynamic pages.

A dynamic page means that the content of the page is determined by certain parameters. If a user had a cookie, or the query string had a parameter, and so on, the page would display differently. For example, if I were making an e-commerce site and had a search box, the results displayed would depend on the user's input into the box. There would be no way I could know ahead of time all the permutations a user might enter; instead, the user's input would go to an application server. The application server would query a database, receive the results, and create HTML on the fly to send back to the user.

For the last twenty-plus years, most applications have been built on a three-tier architecture consisting of a client (or browser), an application server, and a database. From 1993 until the last six or so years, creating a dynamic page looked like this:


1. The browser makes a request to an application server. The request header may contain an authentication or bearer token that establishes the user's identity, or the request may be anonymous.

2. The application server receives the request. If there is an authentication token, the server can then look up the user making the request and make sure the user is authorized to make it. If information from the database is required, the server issues the database query.

3. The database receives the query from the server, executes the query, and then...

4. The database sends the query results back to the server.

5. The server takes the results of the query, packages together all of the JavaScript libraries and CSS stylesheets necessary to display the information, and dynamically creates HTML that is then sent back to the client.

6. The client receives the HTML and renders the page.

All of this was OK, but it was inefficient. Web pages were kind of, well, blah. Content got stale very quickly, and looking at a web page was a lot different from looking at an executable program. Then along came some JavaScript libraries, and things started changing.


Here is the flow for a Single Page App:

1. Instead of calling an application server, the browser receives an index.html, an app.js, fonts, and stylesheets from a Content Delivery Network (CDN). A CDN is specifically designed to cache static assets and serve them to requesters with a minimal number of hops and as geographically close to the requester as possible, which keeps these requests off a server that has more important work to do.

2. The browser receives the static content and then makes a call to the application server.

3. The request to the application server may have authentication and instructions to receive dynamic content for a home page.

4. The server processes the request and calls the database (as before).

5. The database receives the query from the server (as before).

6. The database sends back the results (as before).

7. The server packages up the query results as a JavaScript Object Notation (JSON) object. Now, instead of taking the query results and creating HTML on the fly, the server sends back JSON.

8. The browser receives the JSON from the server and figures out where to put the data.

9. Subsequent interactions with the browser may or may not result in steps 3 - 8 occurring. Under the traditional flow above, if we had a table with columns and wanted the data sorted by a particular column, clicking the column would have kicked off a call to the server, the database would be queried (again), and HTML would be recreated to show the table with the new sort order. Additionally, the client would receive the same JavaScript libraries and stylesheets, and the screen would refresh. With a Single Page App, all of this is avoided: the work of changing the display actually happens on the client. As a result, our server sees fewer requests, improving scalability, and the user gets an almost instantaneous response. Win-win! If the client needs more or different information from the server, it can repeat steps 3 - 8, but from here on out the client and server are only exchanging lightweight JSON payloads. Instead of downloading kilobytes of data on every request, the client and server are exchanging literally bytes of data.
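
To make step 9 concrete, here is a small sketch of re-sorting a table entirely on the client; render and onColumnClick are hypothetical helpers, not real library calls, and the endpoint is a placeholder:

```javascript
// Sketch: once the JSON is on the client, re-sorting a table never touches
// the server - it is just a local state change (helpers are hypothetical)
fetch('/api/readings', { headers: { Accept: 'application/json' } })
  .then(function (response) { return response.json(); })
  .then(function (readings) {
    render(readings); // initial draw

    // Clicking a column header re-sorts locally; steps 3 - 8 never happen
    onColumnClick(function (column) {
      readings.sort(function (a, b) {
        return a[column] < b[column] ? -1 : 1;
      });
      render(readings);
    });
  });
```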

React, Flux, and Server Sent Events

By basing the client portion on a Single Page App architecture, hopefully I have demonstrated how scalability can be improved, response time is better, and the overall experience is enhanced. I chose one of the newer JavaScript libraries, React, as I think it fits well with my overall design principles. React was open-sourced by Facebook in 2013 and has earned a well-deserved following. It is more or less competing with Angular to be the dominant JavaScript library. I am not going to compare and contrast the two, as they are both well supported and widely used. Either one is a decent choice.

React improves on the kind of dynamic updates that jQuery made popular. With jQuery, a developer can change a certain part of the Document Object Model (DOM) without redrawing the whole page. React takes this principle further with a Virtual DOM. When a component is created in React, there are certain values we expect to change, held in its state. Any time the setState method is called, the Virtual DOM is modified, React compares the actual DOM to the virtual one, and only the changed components are automatically refreshed. In my demo, you can see the timer changing every second. This is done rather simply by modifying the state of my component. The only part of the screen that changes is the portion where the counter is. As the seconds tick, nothing changes but the time. Cool!
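
A stripped-down sketch of how a counter like that can work; this is not the actual demo code, just the setState idea, and the component and prop names are made up:

```javascript
// Countdown.js - a sketch of the ticking counter: only setState is called,
// so React re-renders just the time text, not the whole page
import React from 'react';

class Countdown extends React.Component {
  constructor(props) {
    super(props);
    this.state = { secondsLeft: props.secondsUntilNextReading };
  }

  componentDidMount() {
    // Tick once a second; each tick is a simple state change
    this.timer = setInterval(() => {
      this.setState({ secondsLeft: Math.max(this.state.secondsLeft - 1, 0) });
    }, 1000);
  }

  componentWillUnmount() {
    clearInterval(this.timer);
  }

  render() {
    return <span>Next reading in {this.state.secondsLeft} seconds</span>;
  }
}

export default Countdown;
```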

Flux is a pattern recommended by Facebook for developing React applications. Components get their data from stores. Stores are modified via actions. When a store changes its state, any component that is listening receives the update and the data is redrawn. In my demo, the store that holds the data for Zoe's glucose has an EventSource object connected to it. This EventSource holds open a text/event-stream between the client and the server. On the server side, I keep a collection of responses and add to or remove from this collection as clients connect and disconnect. Since the server is written in Node and Node is asynchronous, these open connections do not hamper performance. For the most part, both client and server have an open connection with nothing coming through. However, when an event happens, I can get every connected client to update itself automatically by writing to this connection. The user never needs to refresh the browser to see new information, the EventSource will attempt to reconnect itself every three seconds in case of a disconnect, and the client and server are always in sync. Refreshing the browser continually for new data is so 2012.
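
Here is a simplified sketch of wiring an EventSource into a Flux-style store; the store shape, endpoint, and event handling are assumptions rather than the real code:

```javascript
// glucoseStore.js - sketch of an EventSource feeding a Flux-style store
// (store shape, endpoint, and event names are assumptions, not the real code)
import { EventEmitter } from 'events';

class GlucoseStore extends EventEmitter {
  constructor() {
    super();
    this.reading = null;
    this.source = new EventSource('/api/update');

    // Every server push updates the store; listening components re-render
    this.source.onmessage = (e) => {
      this.reading = JSON.parse(e.data);
      this.emit('change');
    };

    // The browser retries the connection automatically on a disconnect;
    // onerror lets the UI show a "disconnected" state in the meantime
    this.source.onerror = () => {
      this.emit('change');
    };
  }

  getReading() {
    return this.reading;
  }
}

export default new GlucoseStore();
```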

Continuous Integration and Environments

One of the very worst jobs I have ever had was doing the migration from SharePoint 2007 to SharePoint 2013 on behalf of Australian retail giant JB Hi-Fi. It was the inspiration for my most widely read blog post to date, entitled SharePoint is a Colossal Piece of Shit and Should Not Be Used By Anyone. The project was a cluster from the get-go. The IT department at JB Hi-Fi was run by Geoff. Geoff had no idea how to run an IT department, but he knew how to bust a vendor's balls. Somehow, he let a no-name "consulting" company provide a revolving door of developers to create their internal applications. Everything was done fixed fee for next to nothing, giving the developers every incentive to cut corners. They initially loved the first few developers, but got increasingly unhappy with each developer thereafter. Geoff, having no real experience in IT, didn't seem to grasp that maintaining someone else's code is actually harder than writing it from scratch. At some point, the entire thing became such a garbled mess that he decided to "upgrade" from 2007 to 2013 because that would magically fix all the memory leaks and other sordid issues.

By the time I got on the scene, half the upgrade budget had already been spent with nothing to show for it. I got handed a desk and told to go to it, but it took some time to figure out what "it" actually was. Apparently, their method of source control was to stick everything in a folder called "SoPac Solutions (do not delete)" (SoPac is now defunct, having filed for bankruptcy). Within this folder was a series of random C# command shell programs that were scheduled in batches. I spent a good two months manually moving these programs to the new 2013 environment and testing them, while simultaneously being bombarded by support issues for the 2007 version. Except there was no test environment. Bugs were reported in production and I was expected to fix them in production. My only mechanism for fixing a bug was to attach a debugger to a production system and spend hours following the code until I could figure out what needed to be changed. This is perhaps the worst way to go about doing things.

The Right Way To Do It (tm) is to have a series of independent environments under source control. For the most part, any bug is going to be caused by either data or code. If a production bug is found, the production copy of the code should be copied into a new system, a backup of the data should be attached to the copy, and voila! a mechanism for debugging without taking down the production system is in place. Further, it is pretty normal to be working on a feature, have it working, and then find it causes problems someplace else. The Right Way To Do It (tm) is for each developer to have their own environment, independent of everyone else. The developer writes their code and unit tests. When satisfied, the code is then moved from their local environment to a test environment. If you feel like being fancy, the test environment can kick off some integration tests and report whether the new migration was successful. Each developer works independently, and their work is merged into a test or system integration environment. From there, it should be thoroughly tested before going to a User Acceptance Testing (UAT) environment. Once the new code has been verified by QA and the subject matter experts, then and only then should it be pushed to production.

SharePoint makes this just about impossible, but here, writing a simple Node and React based application, I am able to put my source code in Git, publish it to GitHub, and enable a webhook from GitHub. When I commit to either the client code or the server code, GitHub issues an encrypted post to my server. If it passes the validity test, the server runs a shell script to update the npm packages; for the client it does a grunt build, and for the server it restarts the forever process. I now have the ability to work away on a new feature on my trusty Macbook Pro and migrate it with a single command. Some cloud providers have some really cool deployment tools, but my home grown version is running on a $5/month server and does what I need it to. The process of checking in code and having the new environment receive the changes and update itself is called continuous integration.
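
A sketch of what the webhook endpoint could look like: GitHub signs the payload with a shared secret in the X-Hub-Signature header, and the server verifies it before running the deploy script. The script path and secret name are placeholders, and a real implementation would hash the raw request body and use a constant-time comparison:

```javascript
// deploy.js - sketch of verifying the GitHub webhook before deploying
// (assumes an Express `app` with JSON body parsing; names are placeholders)
const crypto = require('crypto');
const exec = require('child_process').exec;

app.post('/deploy', function (req, res) {
  const signature = req.headers['x-hub-signature'];
  const expected = 'sha1=' + crypto
    .createHmac('sha1', process.env.GITHUB_WEBHOOK_SECRET)
    .update(JSON.stringify(req.body)) // in practice, hash the raw body
    .digest('hex');

  if (signature !== expected) {
    return res.status(403).end();
  }

  // git pull, npm install, grunt build / forever restart live in the script
  exec('sh ./deploy.sh', function (err) {
    res.status(err ? 500 : 200).end();
  });
});
```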

NGINX

In my local environment, I run my application server listening on port 3000. To hit the API, I can issue a curl to http://localhost:3000. This is all well and good, except when I am running in production, I want my service to be bound to https://www.thezlotnicks.com/api. Further, I want any browser issuing a GET to http://www.thezlotnicks.com to be forwarded to https://www.thezlotnicks.com and download the index.html and app.js that result from my grunt build. Fortunately, there is a free, open source, commonly used product called Nginx (pronounced Engine X) that acts as a reverse proxy. With a small amount of customization, it can route the calls to /api to the Node server and serve up my static content as well. If you are curious, the following gist has the config file for Nginx. It took me a little while to figure out why my event streams kept closing in production, but the addition of line 35 solved the problem...

https://gist.github.com/PokerGuy/5b3a26f67e6f2e54f7faabf2f4796ea8

And with that, everything is in place for a working server that polls Dexcom, pushes notifications to any connected client, supports continuous integration, and exposes an API. I also created a repository for the client code, which is written as a single page app, can receive notifications from the server, and never requires a browser refresh. The next sprint will cover connecting to the last tier in the three-tier architecture and then finally sending out some conditional alerts and acknowledgements. Happy coding!

Server Code:
https://github.com/PokerGuy/dexcom-share-messenger

Client Code:
https://github.com/PokerGuy/dexcom-share-client