Tuesday, December 6, 2016

What Can I Do To Put You In A New Car, Today?

Let’s get some housecleaning out of the way, starting with the definition of the word “libel”.

li·bel [lahy-buhl] noun, verb, li·beled, li·bel·ing or (especially British) li·belled, li·bel·ling.

noun
1. Law.
a. defamation by written or printed words, pictures, or in any form other than by spoken words or gestures.
b. the act or crime of publishing it.
c. a formal written declaration or statement, as one containing the allegations of a plaintiff or the grounds of a charge.
2. anything that is defamatory or that maliciously or damagingly misrepresents.

verb (used with object)
3. to publish a libel against.
4. to misrepresent damagingly.
5. to institute suit against by a libel, as in an admiralty court.

(from Dictionary.com)

Strangely, as a mere amateur writer, I get accused of libel a lot. While I am prepared to say things that are damaging, I will not misrepresent. All instances have sources. All information has been corroborated by multiple independent sources. Further, I can’t make this shit up; I’m not that creative. It’s not libel if it’s true.

From the good folks at LegalZoom.com:

“If the statement is not protected as an opinion, you may still be protected under the truth defense. A person who wishes to successfully sue you for libel must generally prove the statement is false. In most states, truth is a complete defense to a libel action. You generally can't sue if the statement in question is true, no matter how unpleasant the statement or the results of its publication.”

There is a great story about the painter Pablo Picasso. He painted his seminal work “Guernica” (named for the Basque town bombed during the war) during the Spanish Civil War. The piece was a protest against the war in general and fascism in particular. When the fascists broke into his studio and asked him, point blank, “Did you make this painting?”

Picasso responded simply, “No, you did.”

“What Can I Do To Put You in a New Car Today?”

I like The Kid for the most narcissistic of all reasons. He reminds me of me. I like me; therefore, I like The Kid. Most people wouldn’t see it and would say, “You two are nothing alike!” I am bigger. Louder. Obnoxious-er. The Kid is quiet. Cerebral. Up in his own head a lot. I look like an ex-football player. The Kid is built for marathons. All that stuff is superficial. When you get down to underpinning philosophies and the way we think, The Kid and I are completely simpatico.

I met The Kid when I was his manager at the most toxic of all the small parasitic consulting companies in the Seattle area that used to feast off the large host body, Microsoft. We got along pretty well and he always did good work. One of the many reasons I told “Toxico” to fuck off was that I gave The Kid a good review. They didn’t like my review; they had him rated poorly. I opened his review, reread it, and decided I didn’t want to change a single word; their ratings were nothing more than a popularity contest. They didn’t like the way The Kid dressed. They didn’t like that The Kid wasn’t outgoing. They ignored the fact that he actually got shit done with minimal direction.

The Kid had some problems. I talked to The Kid. He acknowledged the problems and said he would work on them. He wasn’t defensive about it. Our talk was good enough for me. I didn’t write a review that glossed over this and I didn’t rate him number one. It was fair and honest. Ironically, the next year, The Kid had a new manager that thought his job was to promote his direct reports. The new manager wrote a review that said The Kid was the second coming of Jesus Christ himself. For whatever reason, they bought into this new evaluation of The Kid and had him ranked at the top of his class. Sorry, Kid, I didn’t talk you up enough - that’s on me. Right when The Kid had the number one ranking, he quit. Yeah, I like The Kid.

The Kid is not a sycophant, so I never thought he was nice to me because I wrote his review, but then again, you never know how someone feels about you when you have a modicum of power over them until you don’t. There’s no reason for The Kid to keep in touch with me at all unless we actually do get along on some level. So, I was sitting at work, minding my own business, when The Kid IM’ed me via Facebook. He had just realized I was back in town after my two-year walkabout/midlife crisis in Australia and he was positively giddy over his purchase of a new Tesla Model S.

Side note, I have said a couple of things over time to Julie that have been radically misinterpreted. One of these things was when I told her that if we didn’t have kids we would have money fights. She was hurt by my statement. I clarified, “No, what I meant was if we didn’t have kids, there would be piles and piles of money everywhere. Occasionally, for no real reason, I think I would be tempted to pick up a huge wad of cash and throw it in your general direction a la Scrooge McDuck. I would laugh and laugh as I watched the cash balloon into a small money cloud and then we would roll around in all that money.”

My explanation placated her. Actually she found it kind of funny. Needless to say, The Kid does not have kids of his own and hence the Tesla Model S. So we swapped some messages and agreed to meet up at The Matador in Redmond.

The Matador inhabits the exact same spot that was once occupied by Big Time Pizza. Big Time Pizza once was a family friendly pizza restaurant and Julie and I celebrated our first few anniversaries there. Unceremoniously, it closed and I was disappointed. I drove repeatedly past the spot in the middle of Redmond and saw the building for a long time sitting sadly abandoned. One day, a sign popped up announcing The Matador opening soon and promising to bring a tequila bar to Redmond.

The Matador eventually opened and I didn’t care. My life had changed. I was wiping my kids’ asses not chasing asses in tequila bars. Yet, slowly, the reputation of The Matador reached even me. First, you have to understand that Redmond is not a happening place. Seattle has a scene. Even Bellevue has a scene. But Redmond, historically, was a sleepy farming community with a reasonable commute to Seattle. Things changed a bit when Microsoft put their corporate headquarters in Redmond, buying up huge chunks of land, and establishing a sprawling campus. Still, younger Softies tended to live in Seattle and commute to the Eastside. Softies with families may have lived in or around Redmond, but no one went out in Redmond. Yet The Matador managed to establish a mini scene where none had formerly existed.

The thing about Redmond is that it is sort of dominated by Microsoft. The thing about Microsoft is they don’t put much stock in physical appearances. I used to joke that if Playboy ever did a Girls of Microsoft issue, it would be a pretty sad issue. The women at Microsoft wear sweatpants, tennis shoes, and scrunchies. They wear little to no makeup. They may go to the Pro Club, but it’s only to drop their kids at daycare and drink lattes - never to work out. The whole town kind of adopted that attitude. Except when The Matador opened, I started hearing rumors of “talent” in Redmond.

Talent is sort of a loaded word. Boys in their twenties don’t talk about talent. They are out trying to get laid. It’s guys who are already married and looking down the barrel of middle age who start talking about talent. Here is where I get a bit confused, as I’m not exactly sure what I’m supposed to do about talent. Do I stare and ogle? Do I take a quick peek and avert my eyes? Am I supposed to just be happy to bathe in the glory of an attractive younger woman who smiles at me because she works for tips? Am I supposed to flirt?

Guys my age start doing some goofy shit. Some of us get obsessed with golf. Some take fantasy football way too seriously. Others obsess over talent. While I try my best to ignore it, others make it their second job to become talent scouts. They know where the talent is, where it used to be, and where it is going.

Joey’s on Lake Union and in Bellevue are known for talent. All the bars and restaurants in Seattle and around the one cool pocket in Bellevue are known for talent, but talent in Redmond? Impossible! Yet, I kept hearing rumors that there were real girls going to The Matador who wore their hair out of ponytails, wore high heels, and even - dare to dream - skirts. So The Kid, who is really no longer a kid and getting close to middle age himself, and I met at The Matador.

We ordered dinner and had a couple of beers. Attractive waitresses brought us our food. We talked a lot about cars and the consulting industry. Strangely, the two wildly disparate subjects somehow merged. Not only has Tesla changed how cars are designed, moving from a combustion engine to an electric motor, but they are changing the way cars are sold. Traditional car sales go from the manufacturer through an independent dealer, whereas Tesla has showrooms and then orders cars on behalf of the customer.

“What’s going on in Texas is ridiculous,” said The Kid.

Texas state law actually requires car manufacturers to sell cars through dealers. If a Texan wants a Tesla, they can go to a Tesla showroom, but the people working there cannot mention that they can buy the car. If the customer goes out and orders the car, Tesla will put the car in an unmarked truck and drive it out from Louisiana. The independent dealers are up in arms as they, rightly, perceive a threat to their livelihood.

I laughed. “Kid, I remember a time when I got my first job. Being twenty-two and having just a little bit of money, I wanted my first purchase to be a convertible Z28 Camaro. I took about a month off between college and starting my job. I was sitting around the house, and my dad is kind of a car guy, so he said, ‘Let’s go to the Chevy dealership and test drive some Camaros.’”

“I didn’t want to buy the car in Arizona knowing that I’d be moving to California, because California had already started their special ‘California Emissions’. But, we were both bored, and I had never even driven a Camaro, so we went to the dealership. Mind you, I wanted a Z28 convertible in either red or black with a six-speed. I talked to some loser car salesman and before you know it, I was driving a white V6 automatic that was not a convertible. I didn’t mind as I had no intention of buying the car and I thought I could at least get a feel for how I fit in the car and how it handled a bit.”

“So I drove the V6 and tried to imagine how it would feel if it were an 8 cylinder. The test drive was over and I was ready to go, not really sure what I got out of the experience, but I realized that social protocol now dictated that I was supposed to pretend that I was interested in actually buying the car. Before we knew it, my old man and I were literally in a glass room in the back of a car dealership. The sales guy said just wait here a moment and disappeared for a while. He came back with some brochures and he literally looked at me and said, ‘So what can I do to put you in that car today?’”

“The honest answer was nothing, because I needed to buy the car with California Emissions, but my jaw just dropped open. I told him I was looking for a red or black six speed convertible V8. What I drove was a white fixed roof V6.”

“He was unperturbed. ‘What can I do to put you in a new car today?’ he repeated.”

“Really, that was all I needed to know about ‘independent dealers’. They have no desire whatsoever to match a consumer with a car they want. They have a very strong desire to match a consumer with a car they have.”

We laughed a bit talking about how the dealers say they are consumer advocates. The Kid told me about his experience buying a Tesla and how different it was from the high-pressure sales pitch built around available inventory. He paused and brought it back full circle, “Really though, it’s no different from consulting firms. They have no interest in putting the ‘right’ consultant on a project. All they care about is getting people off the bench and billable as fast as possible.”

I jumped in. “It’s funny because somehow companies that don’t have to use middlemen are convinced that it’s in their best interest to use them.”

The Kid and I had both witnessed Microsoft’s roll out of the “Approved Vendors List” (AVL). In 2009, Microsoft’s procurement department had decided there were “too many vendors”, so procurement decided to unilaterally create an AVL which would, in theory, lower cost. A handful of vendors were secretly blessed to be on “the list”. The criteria for selection were not clear and the vendors that made the cut were oftentimes questionable. For those who did not get approved, there was no communication. One day everything was fine, and the next day their purchase orders were being rejected by the internal application without an explanation.

Sadly, this passive-aggressive management of vendors caused some really hard times for people. One day they had a decent, hourly gig at Microsoft. Then, without warning, they didn’t. All because the “consulting company”, and I’m using that term loosely, was no longer on the magically approved list. All sorts of weird shit started happening during this timeframe. I almost profited handsomely from it myself: I got a call from a friend at a vendor off the list and was brokering a deal to essentially pass the contract through an approved vendor with a markup. How these deals and markups were beneficial to Microsoft is really beyond my comprehension, but this really happened.

In the end, those who were on the list now faced lowered competition. Those who were on the outside looking in had to jump through hoops and kiss the right rings to even have a chance to compete.

Why the procurement department thought they knew how to select the “right” consulting agencies and negotiate the rates is the epitome of corporate arrogance. My feeling is that the mission of the procurement department is to justify its own sorry existence. A bunch of people who have no idea what the “consultants” being brought in will actually do go to the bigger “consulting” firms and ask for a discount on the rate card. Procurement checked a box on their year-end goals to reduce cost by X%, the edict went out to use the AVL, and no further studies into the effect of said AVL were ever conducted.

In reality, the “consulting” firms on the magic list did not offer their best resources to Microsoft as they were no longer able to get the premium rates. The net effect was that the people within Microsoft who actually hire and depend on the “consultants” had to go to the medium-to-large firms. These firms no longer had competition and started to send their B team to Microsoft. While the hourly rate went down, they were getting lower quality candidates, which probably increased timelines (costing more money) and significantly added to risk. Sadly, this is what happens when you introduce a middle-man into the ecosystem, just like when a consumer goes out to purchase a vehicle through a dealer.

In effect, the consulting companies had a license to say, “What can I do to staff a consultant for you, today?”

Monday, November 14, 2016

Why Open Source?

There was a period of time, roughly sixteen years ago, that for the life of me I could not understand why someone would spend their time on open source projects. A decade and a half later, I have flipped my opinion to the point where I cannot understand why some people have such a deep seated distrust of open source.

We are now over a decade past when the then-CEO of Microsoft, Steve Ballmer, referred to open source as a “cancer”. There was a point of view that sincerely believed that something given away, maintained by a community, and with no central authority could not possibly be as good as something that was built by professionals. I now truly believe the opposite.

Open source does not depend on security through obscurity. With the source code readily available for viewing by anyone, it had better be secure. Further, there is a different mindset in the open source community versus the closed source world. In the open source world, anyone who finds a security flaw is hailed as a hero (example: the Postgres community asking for people to break their security model). The flaw is patched and the software is up to date and more secure. In the closed source world, finding a flaw makes one a “hacker” and possibly a criminal. A company can sue an individual for even attempting to find a flaw (example: Oracle threatening to sue customers).

As a software professional, I can sort of understand the point of view that simply giving away a product degrades what it is that I do. I tend to look at it in a different light. I do not see myself as a master of everything. I depend on certain raw materials to make things work. I may take a database from Postgres or Mongo, an API server from Java or NodeJs, and a front end library from Facebook’s React. I did not create any of these materials, but I take them, work with them, and assemble something with them that I find useful. Even without making each of these components from scratch, assembling them is a skill and that skill has value.
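To make the raw-materials idea concrete, here is a minimal sketch of that kind of assembly, assuming an Express API server, a Postgres database, and a compiled React front end sitting in a build folder. The table, endpoint, and environment variable names are made up for illustration:

```javascript
// A minimal sketch of assembling open source parts into something useful.
// Assumes the express and pg packages; names below are illustrative only.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Serve the compiled React front end as static assets.
app.use(express.static('build'));

// One API endpoint backed by Postgres; 'readings' is a hypothetical table.
app.get('/api/readings', (req, res) => {
  pool
    .query('SELECT * FROM readings ORDER BY taken_at DESC LIMIT 50')
    .then((result) => res.json(result.rows))
    .catch((err) => res.status(500).json({ error: err.message }));
});

app.listen(3000, () => console.log('Listening on port 3000'));
```

None of those pieces are mine, but picking them, wiring them together, and knowing why each belongs there is the skill being sold.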

In a goofy analogy, I see myself as a general contractor. I get to go to Home Depot and get a bunch of raw materials for free, but making a house out of the materials is on me. Having free material does not take away from the ability to build something with the material. Instead, it builds a community of expertise from which I can draw on to build better finished products.

Further, there is greater flexibility in the components offered in the open source world. In a previous life, I had selected a JavaScript library for displaying tables with the requirement that the table support pagination, searching, and sorting by column. Later, I was told that the library was not Section 508 compliant, as the user needed to be able to tab from the search box to a column, press enter on the column, and have the table sort in the same manner as if a user had clicked the column with a mouse. Instead of being thrown by the new requirement, ripping the library out, and starting over, I modified the code and had the new requirement working in about half an hour.
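For flavor, a fix like that boils down to making the header reachable by keyboard and routing the enter key into the same code path the mouse click already used. This is a sketch of the idea, not the actual library code; sortColumn is a hypothetical stand-in for the library's internal sort handler:

```javascript
// Sketch: make a sortable column header keyboard-accessible.
// sortColumn() is a hypothetical stand-in for the library's click handler.
document.querySelectorAll('th.sortable').forEach((header) => {
  // Make the header reachable with the tab key...
  header.setAttribute('tabindex', '0');

  // ...and let the enter key trigger the same code path as a mouse click.
  header.addEventListener('keydown', (event) => {
    if (event.key === 'Enter') {
      sortColumn(header.dataset.column);
    }
  });
});
```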

So open source creates a bunch of really great components from which cool things may be built. It may very well be more secure than closed source code and it is definitely more flexible and modifiable. But why in the world would someone dedicate their time to something which will inevitably just be given away? While not speaking for everyone, I can speak for myself...

CONTINUING EDUCATION

My first real job out of college was at Deloitte & Touche. At one point, there were six major accounting firms in the United States. These firms were so big that they were known collectively as the “Big Six” (consolidation has since brought it down to the “Big Four”). The accounting companies had consulting arms, but being born of accounting firms, they placed an emphasis on the idea of “continuing education”.

I really love this idea. Just because I know how to do something now with an existing set of tools does not mean that I have been exposed to everything. In the last few years, I have done work in Java, Ruby, and NodeJs. I have built RESTful APIs. I have used MySQL, Postgres, and Oracle databases. I have used jQuery, ReactJs, and Angular. I taught myself how to use map/reduce and deployed projects onto Azure and AWS.

Five years ago, I really struggled with the concept of NoSQL vs a traditional relational store. It took a lot of soul searching, reading, and experimenting to come to a more pragmatic realization that they both have their place and the real trick is knowing when to use each. Without hands-on experience, I am not sure that I would have ever gotten there.
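To illustrate the “both have their place” point, here is a toy comparison with invented schemas on both sides: the relational store shines when normalized data gets joined at query time; the document store shines when the data is naturally hierarchical and read as a unit.

```javascript
// Sketch: the same 'user with orders' lookup in both worlds.
// Both schemas and connection strings are invented for illustration.
const { Pool } = require('pg');
const { MongoClient } = require('mongodb');

async function compare(userId) {
  // Relational: normalized tables, joined at query time, strong constraints.
  const pg = new Pool({ connectionString: process.env.DATABASE_URL });
  const { rows } = await pg.query(
    'SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id WHERE u.id = $1',
    [userId]
  );

  // Document store: the orders live inside the user document, read together.
  const client = await MongoClient.connect(process.env.MONGO_URL);
  const user = await client.db('shop').collection('users').findOne({ _id: userId });

  return { relational: rows, document: user };
}
```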

DOING SOMETHING GOOD

In a certain sense, in a capitalist society, simply producing something is doing good. However, I would like to do more. In my goal of doing an open source project a year, I have raised over $2,000 for the Juvenile Diabetes Research Foundation and am currently working on a monitoring system to help parents care for their children with diabetes. While I believe the monitoring system is worth something monetarily, I cannot sell it as it would require a bunch of regulatory compliance with HIPAA. I can give it away and help other parents, and that actually feels better than commercializing the product.

Those of us caring for T1Ds are all in this together. I would sincerely rather help than profit.

FOR THE LOVE OF THE GAME

For the last several years, I have developed for a living. It would be nice to think that I get paid to do something that I love. However, it is never the coding that is difficult. It seems like in every professional job I have been stuck “Throwing the Slant”. It feels great to be able to decide what I write, how it gets used, and to interact directly with the people who use my software.

SHAMELESS SELF PROMOTION

I may be giving away software, but I am also allowing people to see how I think and work. Tech interviews absolutely suck, but building a portfolio and having something to talk about in a tech interview has done wonders for my career. For the last two job offers, I heard that the developers were literally on my GitHub page checking it out.

What if someone doesn’t like the way I code or thinks it sucks? No problem. I would rather not get an interview. If you like it, let’s talk. If you don’t, please pass and we will save each other some time. It’s a win-win.

Thursday, July 7, 2016

Sprint 3 - Accountabilibuddy

Last sprint, I admitted to having gone off track from doing requirements first, writing one feature at a time, and diligently merging features. So in order to keep myself accountable, I'm going to post my requirements for the sprint here first and you can be my accountabilibuddies. Sadly, I know better than to go rogue, but sometimes it's just so fun!

Here are my requirements for Sprint 3. Following the completion of this Sprint, I should be fairly close to having the data needed for the rules engine. From there, it is a matter of interpreting the event I am already receiving, figuring out who should receive a message, and then integrating with Twilio. After that, some refactoring, a deployment guide, and some automated testing will finish off the project...

Sprint-3-01 - Connect to a Mongo Database

As a… developer,
I want to… connect to a database,
So that… I can permanently store data that will be used by my application.
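A minimal sketch of what satisfying this story might look like with the official Node driver (the database name and connection string are placeholders):

```javascript
// Sprint-3-01 sketch: connect once at startup and hand the handle to the app.
const { MongoClient } = require('mongodb');

const url = process.env.MONGO_URL || 'mongodb://localhost:27017';

MongoClient.connect(url)
  .then((client) => {
    const db = client.db('messenger'); // hypothetical database name
    console.log('Connected to Mongo');
    // pass `db` to the rest of the application here
  })
  .catch((err) => {
    console.error('Mongo connection failed', err);
    process.exit(1);
  });
```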

Sprint-3-02 - Establish login sessions

As a… security minded person,
I want to… have the ability to login,
So that… some data is not readily available to non-authenticated users.

Some API endpoints will not require authentication; others will. To gain access to secure endpoints, a user will click on a login button and be brought to the login screen. Once they enter the Dexcom password, a token will be issued and kept in the sessions collection in the database. If a call is made to a secure API, the request header must include an “Authorization” value containing the session token provided by the login. If the token is not present in the header, the system cannot find a matching session, or the session has expired, the system will return a 403 error. By default, the session will be valid for 24 hours after the last interaction with the system. Every time a secure endpoint is accessed, the last used value of the session will be updated to the current time. The last used value is to be used for timeout calculations.
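As a sketch, the check described above could live in a single piece of Express middleware. The collection and field names here are my guesses, not a finalized schema, and `db` is assumed to be the connected Mongo handle from Sprint-3-01:

```javascript
// Sketch of the session check: 403 on a missing, unknown, or expired token,
// with a sliding 24-hour expiration refreshed on every secure call.
const DAY_MS = 24 * 60 * 60 * 1000;

function requireSession(db) {
  return async (req, res, next) => {
    const token = req.headers.authorization;
    if (!token) return res.status(403).end();

    const session = await db.collection('sessions').findOne({ token });
    if (!session || Date.now() - session.lastUsed > DAY_MS) {
      return res.status(403).end();
    }

    // Every secure access refreshes lastUsed for the timeout calculation.
    await db.collection('sessions').updateOne(
      { token },
      { $set: { lastUsed: Date.now() } }
    );
    next();
  };
}

// Usage: app.get('/api/recipients', requireSession(db), handler);
```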

Sprint-3-03 - Secure Updates

As a… security minded person,
I want to… modify the existing update mechanism,
So that… I can continue to use push events, but differentiate between events that are public and updates that are intended for admin users only.

If the client does not provide an authorization header when connecting an EventSource to the /api/update endpoint, the response will be cached on the server as before. For push events that are considered public, all cached responses will be sent the event. However, if the EventSource has an “Authorization” value in the header, the token will be compared to the sessions collection to ensure its validity. If it is not valid, a 403 should be sent. If it is valid, the response will be considered a secure event stream. When the update method is called and the adminOnly flag is set to true, only the cached responses that are considered secure will receive these updates.
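A sketch of that public/secure split on the server side (the variable names are mine, and the cache is just an in-memory array as described above):

```javascript
// Sketch: cache each connected response with a flag for whether it
// authenticated, then filter on that flag when pushing adminOnly events.
const connections = [];

app.get('/api/update', async (req, res) => {
  const token = req.headers.authorization;
  let secure = false;

  if (token) {
    const session = await db.collection('sessions').findOne({ token });
    if (!session) return res.status(403).end();
    secure = true;
  }

  res.writeHead(200, { 'Content-Type': 'text/event-stream' });
  connections.push({ res, secure });
  req.on('close', () => {
    connections.splice(connections.findIndex((c) => c.res === res), 1);
  });
});

// Public events go to everyone; adminOnly events go to secure streams only.
function update(payload, adminOnly) {
  connections
    .filter((c) => !adminOnly || c.secure)
    .forEach((c) => c.res.write(`data: ${JSON.stringify(payload)}\n\n`));
}
```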

Sprint-3-04 - Add/Update/Delete Message Recipients

As an… administrator to the service,
I want to… add, update, or delete someone who will receive alerts,
So that… I can maintain who is receiving alerts.

Once logged in, the UI should have a dedicated page to add, update, or delete intended recipients. This page should not be available if the user has not authenticated themselves. Basic information that should be viewable/editable includes:

  • Name (minimum 2 characters)
  • Phone number (must be exactly ten digits - numbers only)
  • Expiration Date, checkbox for never expire - if unchecked - display a calendar to pick a date
  • Include weekends and holidays
  • Button to view/update current rules

Sprint-3-05 - View/Edit/Update Alerts

As an… administrator,
I want to… view, edit, update alerts for users,
So that… I can configure rules based messages.

From the user screen, I want to be able to click a button and add, delete, and/or modify existing rules. Rules should be composed of the following (a sketch of a full rule document follows the list):
  • Start Hour
  • Start Minute
  • AM/PM indicator
  • End Hour
  • End Minute
  • AM/PM indicator
  • Message type: text, call, or both
  • Checkboxes for event types:
    • High and box for high value (verify only number > 150), box for repeat every x minutes (must be greater than 5)
    • Low and box for low value (verify only number and > 40 and < 100), box for repeat every x minutes (must be greater than 5)
    • Double Up, box for repeat every x minutes (must be greater than 5)
    • Double Down, box for repeat every x minutes (must be greater than 5)
    • No Data and box for minutes without data, box for repeat every x minutes (must be greater than 60)
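Putting all of that together, a single rule document might look something like this. The field names are my own sketch, not a finalized schema, and the values respect the validations listed above:

```javascript
// Sketch of one rule document. Field names are hypothetical.
const exampleRule = {
  startHour: 10, startMinute: 0, startMeridiem: 'PM',
  endHour: 6, endMinute: 30, endMeridiem: 'AM',
  messageType: 'both', // 'text', 'call', or 'both'
  events: {
    high:       { enabled: true,  value: 180, repeatEveryMinutes: 15 }, // value > 150
    low:        { enabled: true,  value: 60,  repeatEveryMinutes: 10 }, // 40 < value < 100
    doubleUp:   { enabled: false, repeatEveryMinutes: 15 },
    doubleDown: { enabled: true,  repeatEveryMinutes: 10 },
    noData:     { enabled: true,  minutesWithoutData: 30, repeatEveryMinutes: 90 } // repeat > 60
  }
};
```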

Sprint-3-06 All edits must be “live”

As an… administrator,
I want to… see any updates made by another administrator,
So that… I am always looking at the most current data.

If I am logged in and looking at the users screen and another administrator edits or deletes data, the change should be reflected in my current view without me refreshing the browser.

Tuesday, July 5, 2016

Sprint 2 - Environments

Excuses...

I had fully intended to write a blog post and complete a sprint every week, but... It had been a cold, rainy, and thoroughly unenjoyable summer thus far here in Central Texas. And then, the weather got nice. I decided to spend some time jet skiing on the lake with the family instead of working on my messaging service. So a few weeks went by and I hadn't written anything. I had intended to write a very simple JavaScript application that would run 100% in the browser to be able to monitor the current Dexcom reading, but found out that I would be reliant on using the service I wrote in Sprint 1 - the browser version was a non-starter since calling Dexcom requires certain headers to be in place that browsers will not allow. I abandoned all Agile discipline and just started writing the client piece without creating requirements first. I really should fire myself, but... I do have a demonstration of working software, so I will not be too mad at myself.

If you point your browser to https://www.thezlotnicks.com you can see, in real time: Zoe's current glucose level, her trend, and how long until the next reading. It looks simple, but the following items were covered in this sprint:


  • Established an environment for a Single Page App - ReactJs based client
    • All pages, stylesheets, and JavaScript libraries are loaded on the initial call
    • Subsequent calls require only a very lightweight exchange of JSON objects between the client and server
  • Built a persistent event listener into a Flux store
    • Any change in state can be pushed from the server to connected clients
    • On a disconnect, the client knows it is disconnected and the last known event it had received
    • The client will attempt to reconnect to the server within three seconds of a known disconnect
  • Created a poor man's continuous integration environment using GitHub's webhooks
    • Once the code is working in the development environment, a simple git push will relay a message to the production environment
    • The production environment will run a shell script to get the latest changes, run npm install, and restart itself
  • Configured nginx to properly route the client side JavaScript and assets, and to route calls to the API to the Node-based Express server
    • It would be possible to take the client based assets and place them on a Content Delivery Network, but this is a rather simple example, so having nginx serve the static content seems reasonable
    • Additionally, nginx is rerouting all requests from http to https and I have installed a valid SSL certificate on the server for secure communication
  • The client now allows any browser to establish a connection and receive a simple display
    • This will act as the foundation piece for subsequent sprints covering stories to add message recipients, configure message rules, maintain a holiday/vacation calendar, and provide alert acknowledgements
So if any or all of the above seems like technical jargon, I'm going to break it down...

Single Page Apps

Back when the whole Internet was brand new, there were a bunch of static websites made up of HTML text. A static website essentially means that the text of the HTML document did not change. This was OK for doing some simple marketing pages that were called brochureware, but to do anything more complicated, there was a need to make dynamic pages.

A dynamic page means that the content of the page will be established by certain parameters. If a user had a cookie, the querystring had a parameter, etc., the page would display differently. For example, if I were making an e-commerce site and had a search box, the results displayed in the search results would depend on the user's input into the box. There would be no way I could know ahead of time all the permutations a user might enter; instead, the user's input would go to an application server. The application server would query a database, receive the results, and create HTML on the fly to send back to the user.

For the last twenty plus years, most applications were built on a three tier architecture consisting of a client (or browser), an application server, and a database. From 1993 until the last six or so years, creating a dynamic page looked like this:


1. The browser makes a request to an application server. The request header may or may not contain an authentication or bearer token which establishes the user's identity.

2. The application server receives the request. If there is an authentication token, the server can then do a lookup on the user who is making the request and make sure the user is authorized to do so. If information from the database is required, the server issues the database query.

3. The database receives the query from the server, executes the query, and then...

4. The database sends the query results back to the server.

5. The server takes the results of the query, packages together all of the JavaScript libraries and CSS stylesheets necessary to display the information, and dynamically creates HTML that is then sent back to the client.

6. The client receives the HTML and renders the page.
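Collapsed into code, steps 1 through 6 look something like this hypothetical Express route; the table, columns, and markup are invented for illustration:

```javascript
// Sketch of the classic dynamic-page flow: take the user's input, query the
// database, build HTML on the fly, and send the whole page back.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/search', (req, res) => {
  // Steps 2-4: the server queries the database with the user's input.
  pool
    .query('SELECT name, price FROM products WHERE name ILIKE $1', [
      `%${req.query.q || ''}%`,
    ])
    .then(({ rows }) => {
      // Step 5: HTML is assembled on the fly for every single request.
      const items = rows
        .map((r) => `<li>${r.name} - $${r.price}</li>`)
        .join('');
      res.send(`<html><body><ul>${items}</ul></body></html>`);
    });
});

app.listen(3000);
```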

All of this was OK, but it was inefficient. Web pages were kind of, well, blah. Content got really stale very quickly and looking at a web page was a lot different from looking at an executable program. Along came some JavaScript libraries and things started changing.


Here is the flow for a Single Page App:

1. Instead of calling an application server, the browser receives an index.html, an app.js, fonts, and stylesheets from a Content Delivery Network. A Content Delivery Network is specifically designed to cache static assets and serve them to requesters with a minimal number of hops and as geographically close to the requester as possible. The Content Delivery Network is optimized in a manner to provide the response as quickly as possible and avoids routing the requests to a server that has more important work to do.

2. The browser receives the static content and then makes a call to the application server.

3. The request to the application server may have authentication and instructions to receive dynamic content for a home page.

4. The server processes the request and calls the database (as before).

5. The database receives the query from the server (as before).

6. The database sends back the results (as before).

7. The server packages up the query results as a JavaScript Object Notation (JSON) object. Now, instead of taking the query results and creating some HTML on the fly, the server is going to send back JSON.

8. The browser receives the JSON from the server and figures out where to put the data.

9. Subsequent interactions with the browser may or may not result in steps 3 - 8 occurring. Under the first diagram, if we had a table with columns and wanted the data sorted by a particular column, we would have clicked the column and that would have kicked off a call to the server, the database would be queried (again), and HTML would be recreated to show the table with the new sort order. Additionally, the client would receive the same JavaScript libraries and stylesheets and the screen would refresh. With a Single Page App, all of this would be avoided. The work of changing the display would actually happen on the client. As a result, our server sees fewer requests, improving our scalability, and the user gets an almost instantaneous response. Win-win! If the client needs more or different information from the server, it can repeat steps 3 - 8, but from here on out, the client and server are only exchanging lightweight JSON payloads. Instead of downloading kilobytes of data on every request, the client and server are exchanging literally bytes of data.
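Here is the sorted-table example from step 9 as a sketch: one JSON fetch up front, then every re-sort happens locally with no second trip to the server (the endpoint and element names are invented):

```javascript
// Sketch of step 9: fetch the rows once, then re-sort entirely in the browser.
let rows = [];

fetch('/api/products')
  .then((res) => res.json())
  .then((data) => {
    rows = data;
    render(rows);
  });

function sortBy(column) {
  // No server call, no page refresh - just local work and a redraw.
  rows.sort((a, b) => (a[column] > b[column] ? 1 : -1));
  render(rows);
}

function render(data) {
  document.querySelector('#table-body').innerHTML = data
    .map((r) => `<tr><td>${r.name}</td><td>${r.price}</td></tr>`)
    .join('');
}
```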

React, Flux, and Server Sent Events

By basing the client portion on a Single Page App architecture, hopefully I have demonstrated how scalability can be improved, response time is better, and the overall experience is enhanced. I chose one of the newer JavaScript libraries, React, as I think it fits well with my overall design principles. React was open-sourced by Facebook in 2013 and has earned a well deserved following. It is more or less competing with Angular to be the dominant JavaScript library. I am not going to compare and contrast the two, as they are both well supported and widely used. Either one is a decent choice.

React is a vast improvement on the kind of power that jQuery provides. With jQuery, a developer can make a dynamic change to a certain part of the Document Object Model (DOM) without redrawing the whole page. React takes this principle and creates a Virtual DOM. When a page is created in React, there are certain variables that we expect to change by manipulating a component's state. Any time the setState method is called, the Virtual DOM is modified and React compares the actual DOM to the virtual one and automatically refreshes the changed components. In my demo, you can see the timer changing every second. This is done rather simply by modifying the state of my component. The only part of the screen that changes is the portion where the counter is. As the seconds tick, nothing changes but the time. Cool!
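The ticking counter really is about that simple. This is a sketch of the pattern rather than my actual demo code, written without JSX so it runs as-is:

```javascript
// Sketch: a countdown that calls setState once a second. React diffs the
// Virtual DOM and patches only the text node holding the seconds value.
const React = require('react');

class Countdown extends React.Component {
  constructor(props) {
    super(props);
    this.state = { secondsLeft: 300 }; // ~5 minutes until the next reading
  }

  componentDidMount() {
    this.timer = setInterval(
      () => this.setState({ secondsLeft: this.state.secondsLeft - 1 }),
      1000
    );
  }

  componentWillUnmount() {
    clearInterval(this.timer);
  }

  render() {
    return React.createElement(
      'span',
      null,
      `${this.state.secondsLeft}s until the next reading`
    );
  }
}
```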

Flux is a pattern recommended by Facebook for developing React applications. Components get their data from stores. Stores are modified via actions. When a store changes its state, any component that is listening receives the update and the data is redrawn. In my demo, in the store that holds the data for Zoe's glucose, I have connected an EventSource object. This EventSource object holds open a text/stream between the client and the server. On the server side, I have a collection of responses and add to or remove from this collection as clients connect and disconnect. Since the server is written in Node and Node is asynchronous, this connection does not hamper performance. For the most part, both client and server have an open connection with nothing coming through. However, when an event happens, I can get every client with a connection to automatically update itself by writing to this connection. The user never needs to refresh the browser to see new information, the EventSource will attempt to reconnect itself every three seconds in the case of a disconnect, and the client and server are always in sync. Refreshing the browser continually for new data is so 2012.
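A sketch of the client-side wiring, with Node's EventEmitter standing in for whatever store base class the app actually uses:

```javascript
// Sketch: an EventSource feeding a Flux-style store. Components subscribe
// to 'change' and redraw whenever the server pushes a new reading.
const EventEmitter = require('events');

class GlucoseStore extends EventEmitter {
  constructor() {
    super();
    this.reading = null;

    // EventSource holds the text/event-stream open and reconnects on its
    // own after a disconnect.
    const source = new EventSource('/api/update');
    source.onmessage = (event) => {
      this.reading = JSON.parse(event.data);
      this.emit('change');
    };
  }
}

const store = new GlucoseStore();
// In a component: store.on('change', () => this.setState({ reading: store.reading }));
```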

Continuous Integration and Environments

One of the very worst jobs I have ever had was doing the migration from SharePoint 2007 to SharePoint 2013 on behalf of Australian retail giant JB Hi-Fi. It was the inspiration for my most widely read blog post to date, entitled SharePoint is a Colossal Piece of Shit and Should Not Be Used By Anyone. The project was a cluster from the get-go. The IT department at JB Hi-Fi was run by Geoff. Geoff had no idea how to run an IT department, but he knew how to bust a vendor's balls. Somehow, he let a no-name "consulting" company provide a revolving door of developers to create their internal applications. Everything was done fixed fee for next to nothing, giving the developers every incentive to cut corners. They initially loved the first few developers, but got increasingly unhappy with each developer thereafter. Geoff, having no real experience in IT, didn't seem to grasp that maintaining someone else's code is actually harder than writing it from scratch. At some point, the entire thing became such a garbled mess that he decided to "upgrade" from 2007 to 2013 because that would magically fix all the memory leaks and other assorted issues.

By the time I got on the scene, half the upgrade budget had already been spent with nothing to show for it. I got handed a desk and told to go to it, but it took some time to figure out what "it" actually was. Apparently, their method of source control was to stick everything in a folder called "SoPac Solutions (do not delete)" (SoPac is now defunct, having filed for bankruptcy). Within this folder was a series of random C# console programs that were scheduled in batches. I spent a good two months manually moving these programs to the new 2013 environment and testing them while simultaneously being bombarded by support issues for the 2007 version. Except, there was no test environment. Bugs were reported in production and I was expected to fix them in production. My only mechanism to fix a bug was to attach a debugger to a production system and spend hours following the code until I could figure out what needed to be changed. This is perhaps the worst way to go about doing things.

The Right Way To Do It (tm) is to have a series of independent environments under source control. For the most part, any bug is going to be caused by either data or code. If a production bug is found, the production copy of the code should be copied into a new system, a backup of the data should be attached to the copy, and voila! A mechanism for debugging without taking down the production system is in place. Further, it is pretty normal to be working on a feature and have it working, only for it to cause problems someplace else. The Right Way To Do It (tm) is for each developer to have their own environment independent of everyone else. The developer writes their code and unit tests. When satisfied, the code is then moved from their local environment to a test environment. If you feel like being fancy, the test environment can kick off some integration tests and report whether the new migration was successful. Each developer works independently, only to have their work merge into a test or system integration environment. From there, it should be thoroughly tested before going to a User Acceptance Testing (UAT) environment. Once the new code has been verified by QA and the subject matter experts, then and only then, should it be pushed to production.

SharePoint makes this just about impossible, but here, writing a simple Node and React based application, I am able to put my source code in Git, publish it to GitHub, and enable a webhook from GitHub. When I do a commit to either the client code or the server code, GitHub issues a signed POST to my server. If it passes the validity check, it runs a shell script to update the npm packages; for the client it does a grunt build, and for the server it restarts the forever process. I now have the ability to work away on a new feature on my trusty Macbook Pro and migrate it with a simple command line. Some cloud providers have some really cool deployment tools, but my home grown version is running on a $5/month server and does what I need it to. The process of checking in code and having the new environment receive the changes and update itself is called continuous integration.
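The receiving end of that webhook is small. This is a sketch, not my actual script: the secret, route, and deploy script path are placeholders, and the sha1 HMAC in the X-Hub-Signature header is how GitHub signs its webhook payloads.

```javascript
// Sketch: verify GitHub's webhook signature, then kick off the deploy script.
const crypto = require('crypto');
const { execFile } = require('child_process');
const express = require('express');

const app = express();
// Keep the raw body around - the HMAC is computed over the exact bytes sent.
app.use(express.json({ verify: (req, res, buf) => { req.rawBody = buf; } }));

app.post('/deploy', (req, res) => {
  const expected = 'sha1=' + crypto
    .createHmac('sha1', process.env.WEBHOOK_SECRET)
    .update(req.rawBody)
    .digest('hex');

  // Reject anything not signed with the shared secret.
  if (req.headers['x-hub-signature'] !== expected) {
    return res.status(403).end();
  }

  execFile('./deploy.sh', (err) => console.log(err || 'deployed'));
  res.status(200).end();
});

app.listen(3001);
```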

NGINX

In my local environment, I run my application server listening on port 3000. To hit the API, I can issue a curl to http://localhost:3000. This is all fine and good, except when I am running in production, I want my service to be bound to https://www.thezlotnicks.com/api. Further, I want any browser issuing a GET to http://www.thezlotnicks.com to be forwarded to https://www.thezlotnicks.com and download the index.html and app.js that result from my grunt build. Fortunately, there is a free, open source, commonly used product called Nginx (pronounced Engine X) that acts as a reverse proxy. It can route the calls to /api to the Node server and serve up my static content as well with a small amount of customization. If you are curious, the following gist has the config file for Nginx. It took me a little while to figure out why my event streams kept closing in production, but the addition of line 35 solved the problem...

https://gist.github.com/PokerGuy/5b3a26f67e6f2e54f7faabf2f4796ea8
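For anyone who does not want to click through, the server-sent-events portion of a config like this generally boils down to the following directives. This is a sketch rather than the actual file, but turning off buffering and raising the read timeout is the usual fix for event streams mysteriously closing behind a proxy:

```nginx
# Sketch: reverse proxy /api to the Node server with SSE-friendly settings.
location /api {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Connection '';

    # Without these, nginx buffers the stream and times out idle
    # connections, silently closing long-lived event streams.
    proxy_buffering off;
    proxy_read_timeout 24h;
}
```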

And with that, everything is in place for a working server providing polling to Dexcom, push notifications to any connected client, continuous integration, and an API. I also created a repository for the client code which can receive notifications from the server, written as a single page app, and never requires a browser refresh. The next sprint will cover connecting to the last tier in the three tier architecture and then finally sending out some conditional alerts and acknowledgements. Happy coding!

Server Code:
https://github.com/PokerGuy/dexcom-share-messenger

Client Code:
https://github.com/PokerGuy/dexcom-share-client