In Part 1, I reviewed the gory details of the foundation required to run synthetic browser testing at scale. Now that we have a framework for building out our tests, we can move forward with wrapping our test runner in a web application so that the metrics we care about can be gathered and viewed through a decent UI. For this example, I’m going to use the Alexa top 12 news sites and execute a test over each one with the latest Chrome web browser.
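As a rough sketch of how the web application could fan the sites out to the test runner, the snippet below builds one queued job per URL. The URL list, job shape, and `buildJobs` helper are all illustrative assumptions, not the actual implementation:

```javascript
// Hypothetical job fan-out: one test job per site, all against Chrome.
// (A real list would hold the full Alexa top 12; three shown here.)
var NEWS_SITES = [
  'http://cnn.com',
  'http://nytimes.com',
  'http://bbc.co.uk'
];

function buildJobs(urls, browser) {
  return urls.map(function (url) {
    return { url: url, browser: browser, status: 'queued' };
  });
}

var jobs = buildJobs(NEWS_SITES, 'chrome');
```

The web app's runner would then pull jobs off this queue, execute each in a browser container, and persist the resulting metrics for the UI.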
Docker Swarm and Selenium

Both Docker and Selenium are pretty much household names these days in the world of software engineering. I've been fascinated with Docker since its inception and have been using it for side projects and in my day job for a few years now. I recently came across the need to test a Chrome extension by loading a web page while that extension is installed. The test loads the page, waits for it to finish loading, checks some JS variables and APIs, and then spits out a screenshot and any needed metrics.
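The flow of that test can be outlined as below. This is my own sketch against a generic "driver" interface so it stays self-contained; the real test uses Selenium, and the method names here (`loadPage`, `evalJs`, etc.) are stand-ins, not the actual WebDriver API:

```javascript
// Outline of the extension test: load, wait, inspect JS state, screenshot.
function runExtensionTest(driver, url) {
  driver.loadPage(url);                                 // 1. load the page
  driver.waitForLoad();                                 // 2. wait for it to finish loading
  var flag = driver.evalJs('window.__extensionReady');  // 3. check JS variables/APIs
  var shot = driver.screenshot();                       // 4. capture a screenshot
  return { url: url, extensionReady: !!flag, screenshot: shot };
}

// Minimal stub driver so the flow can be exercised without a real browser.
var stubDriver = {
  loadPage: function () {},
  waitForLoad: function () {},
  evalJs: function () { return true; },
  screenshot: function () { return 'fake.png'; }
};

var result = runExtensionTest(stubDriver, 'http://example.com');
```

With Docker Swarm, many instances of this test can run in parallel, each browser isolated in its own container.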
As the creator and overseer of an open source project, I had to make the tough decision of when to pull the plug and move on. In 2012, I started a web-based, real-time feedback project called Onslyde. To me, the project was a huge success with many measurable results: a massive amount of research into WebSockets and mobile devices, and code that let me craft talks for speaking engagements at conferences around the globe.
Today, many companies synthetically measure web performance with various scripts and services. Now that everyone is able to measure those performance numbers and visualize the problem areas, it's time to raise the bar in terms of scalability, portability, and the use of newer DOM APIs. The following charts show a snapshot of data collected over one year (2012-2013) from the CNN.com home page using Loadreport.js.

[Charts: Loadreport data from 2012-2013 for CNN.com]

I started the Loadreport project while working on the CNN homepage in 2012.
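For context, the kind of numbers Loadreport-style tools derive come from the browser's Navigation Timing API (`window.performance.timing`). The helper below is my own sketch of how a few common metrics fall out of those timestamps, not code from Loadreport.js itself:

```javascript
// Derive common load metrics from a Navigation Timing-style object.
// All fields are millisecond timestamps relative to the same clock.
function computeLoadMetrics(t) {
  return {
    // time to first byte: request sent until first response byte arrives
    ttfb: t.responseStart - t.requestStart,
    // DOM ready: navigation start until DOMContentLoaded begins firing
    domReady: t.domContentLoadedEventStart - t.navigationStart,
    // full load: navigation start until the load event begins firing
    loadTime: t.loadEventStart - t.navigationStart
  };
}

// Example with fabricated timestamps:
var metrics = computeLoadMetrics({
  navigationStart: 0,
  requestStart: 120,
  responseStart: 320,
  domContentLoadedEventStart: 900,
  loadEventStart: 1500
});
```

Charting values like these over months is what makes a slow regression on a page such as CNN.com's visible.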
Overview

I've been working on an open source project called Onslyde for almost two years. If you want to know the details behind it, you can read the earlier articles or watch a recent interview. This year, at Devnexus 2014, I wanted to take Onslyde a bit further by offering a way for sponsors to ask questions throughout the day between sessions. Since this was a trial/experiment, I went old school and didn't create a web interface for reserving sponsored slots.
Overview

At the beginning of 2013 I was given the incredible opportunity to start with an empty canvas and come up with a completely new web application for Apigee. For the past year I've been heads down on merging Apigee's Usergrid and Mobile Analytics products using AngularJS. For those interested: Usergrid, a Backend as a Service, was acquired by Apigee in early 2012 and has served as the core tool of all Apigee trainings and developer outreach efforts.
Overview

Some of the best-known approaches for running a countdown or count-up timer in AngularJS are shown on JSFiddle using setInterval and Angular's built-in $timeout. Both approaches require the use of $scope.$apply, which is completely normal: it forces the page/bindings to update when a change is made outside the AngularJS lifecycle (like inside a setInterval or setTimeout). If you want to read more about $scope.$apply, check out this article.
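To make the pattern concrete, here is a minimal sketch. The timer state itself is plain JavaScript; the commented wiring shows where $scope.$apply comes in when setInterval drives the updates ($scope and the variable names are assumptions for illustration):

```javascript
// A tiny countdown: tick() decrements once per call, never below zero.
function makeCountdown(seconds) {
  var remaining = seconds;
  return {
    tick: function () {
      if (remaining > 0) remaining -= 1;
      return remaining;
    },
    value: function () { return remaining; }
  };
}

// Inside an AngularJS controller you would drive tick() from setInterval
// and wrap the binding update in $scope.$apply so the view refreshes,
// since setInterval fires outside Angular's digest cycle:
//
//   var timer = makeCountdown(60);
//   setInterval(function () {
//     $scope.$apply(function () {
//       $scope.remaining = timer.tick();
//     });
//   }, 1000);

var timer = makeCountdown(3);
```

With $timeout (rescheduling itself each second) the $scope.$apply call is unnecessary, because $timeout runs its callback inside the digest cycle.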
Overview

If you use a static website generator, then you may be aware of the pain that goes into getting everything automated and pushed out to GitHub Pages on each commit. The manual workflow goes something like this:

1. code your site using an asciidoc/markdown/haml/sass/less/etc. preprocessor
2. the preprocessor (or build) generates the static site locally on your machine
3. copy the static site to your local gh-pages or username.github.com repo/branch
4. git push the new site
5. done

Now, with a little scripting, we can have:

1. code your site using asciidoc/markdown/haml/sass/less/etc.
2. git push to the source repo
3. done (with so many other cool features at our fingertips)

Most preprocessor tools do have some kind of built-in function for this workflow, but when you need to take it to a finer-grained level and leverage services on the CI server, this is what must be done.
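The CI-side publish step can be sketched as a short command pipeline. The branch name, output directory, and command order below are assumptions about a typical gh-pages setup, not the exact script from this post; building the commands as data first makes the script easy to dry-run:

```javascript
// Build the git commands a CI job would run to publish a generated site.
function pagesPublishCommands(opts) {
  var out = opts.outputDir || '_site';        // where the generator writes the site
  var branch = opts.branch || 'gh-pages';     // Pages branch for a project repo
  return [
    'git clone --branch ' + branch + ' --depth 1 ' + opts.repoUrl + ' pages',
    'cp -R ' + out + '/. pages/',
    'git -C pages add -A',
    'git -C pages commit -m "publish site"',
    'git -C pages push origin ' + branch
  ];
}

// On the CI server each command would be executed in order (for example
// via child_process.execSync); here we just build the list:
var cmds = pagesPublishCommands({
  repoUrl: 'git@github.com:user/user.github.io.git'  // hypothetical repo
});
```

A nice side effect of this shape is that the same list can be printed for a dry run or handed to the executor for the real publish.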