As experts in a fast-moving industry, we're always learning new things to share.
Here's what's on our minds.

Inspecting Web Traffic with Burp Suite Proxy

By Andy Kofod

Burp Suite is an awesome tool for security testing. Following up on our previous article, this one looks at how to inspect intercepted web traffic in Burp.

Pair Programming Patterns

By Bob Fornal

Developers want to get more done, and less experienced developers need to learn from more experienced ones. When most people think of pair programming, they picture two developers sitting at the same machine, sharing a keyboard. With the increasing popularity of remote work, it is now possible to pair program while thousands of miles apart.

Check out the full article!

Getting Started With Burp Suite

By Andy Kofod

If you're interested in information security, an HTTP proxy is essential. Whether you're doing web penetration testing, bug bounty hunting, or just want to try some CTFs, Burp Suite is an excellent tool to have in your arsenal. Click here to learn how to get Burp Suite set up in your test environment.

Types of Security Assessments and Which One is Right for Your Organization

By Andy Kofod

There are numerous types of security assessments, and knowing which one is right for your organization is an important step in improving your security posture. The terminology can be confusing, and the names are often used interchangeably. This article discusses three different types of assessments, what they are, and when it's appropriate to use each. Click here to read more...

Getting started with Machine Learning

By Dan Wypiszynski

Artificial Intelligence, Machine Learning and Deep Learning are some of the hottest topics right now and have been experiencing an explosion in popularity and usage. Click here to learn more about Machine Learning and the resources you can use to become an expert!

Docker Build Best Practices

By Ed LeGault

As more and more applications are containerized, it is important to point out some best practices to optimize and better support the building of those images. Click here to read more...

Picking out the parts for a custom Machine Learning Box

By Dan Wypiszynski

Building your own Machine Learning Box can be fun and save a lot of money over cloud solutions.  Click here to learn how to pick out the best parts for your Machine Learning Box!

Installing an OS and Software for Machine Learning

By Dan Wypiszynski

Setting up a new computer for Machine Learning can be complicated.  Click here to learn how to install an OS and all the packages you'll need to accomplish your Machine Learning goals!

Development Environment Considerations for Containerized Applications

By Ed LeGault

What are the things that should be considered for the developer experience when building and testing a containerized application?  Why is this important?  What are the challenges?  The answers to these questions and more can be found in the full article here.

Preparing for the AWS Solution Architect Professional Certification

By Andrew May

I have a deadline - in late May my AWS Solution Architect Associate certification will expire. Rather than simply renewing it I'd like to "level-up" to the Professional certification.

I'm creating a series of articles about the process.

Stop Creating a Repo For Your QA Automated Tests

By Dennis Whalen

Are you keeping your QA automation tests in a repository separate from the application code? That might not be a great idea! Check out the latest blog post to find out why.

Automated API Testing with Karate

By Dennis Whalen

When it comes to testing API endpoints, a number of tools and frameworks exist to support automated testing. A relatively new player in the area is the open-source tool Karate. 

Check out the latest blog post for more info!

Better Code Review Practices

By Bob Fornal

Depending on how it's done, a code review can help find bugs, come to nothing, or even harm the interpersonal relationships of a team. Therefore, it is important to pay attention to the human aspects of code reviews. To be most successful, code reviews require a certain mindset and phrasing.

Check out the full article!

Movin’ On to Dublin’s Bridge Park

By Erica Krumlauf

Whether people admit it or not, the design and location of your office have a huge impact on team culture. Ever worked in a grey sea of cubicles? It doesn’t exactly inspire high-energy work. At Leading EDJE, we want our office space to be the opposite of drab. If our mantra of “Real. Fun. Geeks.” tells you anything, you know we are all about working hard and having fun doing it. Our culture and core values are at the heart of everything we do, and it’s important we have a physical space that matches the energy and passion our team brings every day.

That's why we're thrilled to announce our office is moving in early 2020 to Dublin’s Bridge Park. If you’re unfamiliar with the area, think all the amenities of a cool, hip urban space – restaurants, bars, shopping, local businesses – tucked in one of Columbus’ fastest-growing suburbs. Leading EDJE is proud to call Dublin home. So when our lease came up in our current office space, we jumped at the chance to stay in Dublin while relocating our office to a place that fits perfectly with our team’s energy and culture.

Being in Bridge Park gives us a space to do our best work and offers our team a really awesome office with amenities they can take advantage of professionally and personally.

The City of Dublin has a fast-growing sector of technology companies, and we’re proud to be part of the movement to make Dublin a leading technology hub of the Midwest. Being in Bridge Park will be one more way we can attract and retain top technology talent.

There will be lots happening over the next couple months as we prepare to move. We hope you’ll follow along on our Facebook page to see what’s new in our Bridge Park space.

Take 12 Weeks to Help Your IT Team Work Smarter, Not Harder

By Wendy Ivany

Here’s a question: does it seem like your IT team barely has time to blink?

We’ve all been there. Technology is such a fast-paced industry that it can sometimes take everything you’ve got just to keep up.

But let’s take a step back and think about all the elements that go into running a successful IT team. From talent to tools to processes, there are a lot of parts to keep track of and make sure are running at their highest potential.

That’s because running a successful IT team means making sure each element is optimized and working together in the most efficient and effective way.

Kind of like how brewing the perfect beer requires just the right ingredients, brewing process and technique. Or how the Ryder Cup brings together the very best in the game of golf, to create a dream team, optimized for competition.

When you’re in the weeds of the work, it can be hard to take a big-picture look. But I’m here to tell you, investing the time to take a step back before moving forward, to ensure you’re working smarter, not harder, will pay dividends in the long term.

Leading EDJE has been proud to partner over the years with IT departments from some of the nation’s top companies, helping them uncover the big and small steps teams need to take to bring out the best in everyone.

That’s why we’re so excited to introduce EDJE Optimization – our proven 12-week process to optimize the success of IT teams.


By examining six critical segments of technology, we come alongside IT teams to assess what’s working and what’s not, and develop a strategic plan for success grounded in research and best practices.

Is your team ready to take the next big step? Let's talk about EDJE Optimization and how, together, we can help your team perform at its highest level. You can find out more on our website, or drop us a note.

Your First Automated Mobile Test

By Dennis Whalen

Automated testing for web and mobile applications has many similarities, but the switch to mobile apps requires familiarity with some new tools. In addition to dealing with a new application platform, a typical enterprise mobile application will run on both iOS and Android, which introduces even more complexities.

Appium is an open-source, cross-platform library for automating both iOS and Android apps. In this article I’ll walk through some of the architectural basics of Appium, look at a tool to assist with mobile element locators, and finally build a working automated test.

Mobile Testing Overview

So, what is Appium? Appium is a Node.js web server that exposes a REST API. It accepts automation requests from your test and runs them on the requested mobile device or simulator.

An automated test that leverages Appium is built from the following key components:

Test automation script

The test automation process starts with the automation script.  Since the script will ultimately talk to Appium via a REST API, any test framework and programming language can be used.  If you want to use one of the existing Appium client libraries (and you do), you can build your tests in most of the popular programming languages.

Appium client library

Existing Appium client libraries have language bindings for Ruby, Python, Java, JavaScript, PHP, C#, and RobotFramework. Your automation scripts will leverage these libraries to establish a connection with Appium and send automation requests.

Appium web server

Appium is a web server that exposes a REST API.  The basis of the API is the Selenium Webdriver API, with extensions added to address the unique needs of mobile apps.  A test session is started with Appium by posting a JSON object called Desired Capabilities.  This object defines the key components of your requested test session.

Automation framework

Appium leverages vendor-provided automation frameworks to implement the client automation. That means the UI automation tools provided by Apple and Google are the same tools used by Appium.


If you have never set up your machine for mobile testing with Appium, this will likely be the most time-consuming part of this walkthrough. For our example I’ll focus only on Android automation, but since we’re using Appium, the steps are similar for iOS.

To get things ready for Android automation, you need to install Java, Android SDK (Android Studio), Node.js, and Appium Desktop Server.  Follow the Appium install instructions for details.

Once everything is installed, you’re ready to run appium-doctor to verify your setup.

appium-doctor --android

Hopefully you’ll see lots of green checkmarks.

Our Application

We’re getting closer to building a test, so you might be thinking, what app are we testing?  For this walkthrough we are going to do some automation with an existing Android demo app, and our test will download and install the app on our Android emulator.

The API Demos app is available via the Appium website and can be used to demonstrate a number of Android controls.   For this exercise we are just going to automate a few taps. In our sample app, the main page is a tappable list of control types:

Locating Elements with Appium Desktop

Now that we have an app, it’s time to start building our test script. To do that we’re going to use Appium Inspector to identify the elements we want to interact with. If you didn’t install Appium Desktop in the previous steps, go to the Appium website and click the big blue button to “Download Appium”. From there you need to:

  • start the Appium Desktop app
  • click the Start Server button to start Appium
  • start an Inspector session by clicking the magnifying glass icon near the top right corner


This tool is going to let us connect to our app and find some element locators for the controls we want to interact with. 

Desired Capabilities

Desired Capabilities is a JSON object that allows us to start an Appium session and interact with our app. Our example is fairly basic and will include the platform name, automation framework name, a device name, and a URL to download the app. Appium will start a session based on these requirements and install the app onto the device via the provided URL.

To see this in action, copy the following into the JSON Representation text area of Appium Inspector, and click the save icon in the upper right.

{
  "platformName": "Android",
  "automationName": "UiAutomator2",
  "deviceName": "My Android Device",
  "app": ""
}

Clicking the save icon will populate the Desired Capabilities textboxes to the left of the “JSON Representation” textbox.

 And finally, before starting the Appium session we need to start an Android emulator via Android Studio.

With the emulator started, it’s time to start the Appium session. Click the blue “Start Session” button at the bottom. Appium will download the app, install it on your emulator, and display the app’s main screen.

 If you click on the “Views” list item in the left panel, the Selected Element panel will be populated with element info for the selected element.  For the Views element we see the accessibility id for the element is “Views”.  

 We can see other access mechanisms such as xpath, element id, class, etc.  The preferred mechanism for locating an element is accessibility id, so we’re going to use that.

 To locate this element in our JavaScript test script we use something like:

let viewsElement = await driver.elementByAccessibilityId("Views");

 For our test script we are going to click the Views item, then the Buttons item, and finally the Off/On toggle button.  We can find accessibility IDs of all these elements by using the Appium Inspector.

Creating the Test

Finally, we are going to create a working test script using what we’ve learned. This is a simplified script. For example, a truly robust test would include acceptance criteria. Acceptance criteria are typically implemented via asserts, which allow us to define expected results and the actions to take when those expectations are not met, such as logging and screen captures. We’re going to skip details like this for now.

 In the code below you can see we start an Appium session using the same Desired Capabilities info as earlier, and we tap through our app by referencing accessibility ID’s we identified using Appium Inspector.  

 1. Go ahead and create a file named test.js and paste this content:

const wd = require('wd');

// Connect to the local Appium server (it must already be running)
const driver = wd.promiseChainRemote("http://localhost:4723/wd/hub");

const caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "deviceName": "My Android Device",
    "app": ""
};

async function main() {
    await driver.init(caps);

    // Tap through the app: Views -> Buttons -> Off/On toggle
    let viewsElement = await driver.elementByAccessibilityId("Views");
    await viewsElement.click();
    let buttonsElement = await driver.elementByAccessibilityId("Buttons");
    await buttonsElement.click();
    let onOffToggleElement = await driver.elementByAccessibilityId("Toggle");
    await onOffToggleElement.click();

    await driver.quit();
}

main();


2. Install wd, a Node.js client for WebDriver/Selenium

npm install wd

3. Start your Android emulator

4. Start appium with Appium Desktop

5. Run your test

node test.js

If all goes according to plan, when you run this test your app will download to your emulator and you’ll see the test automation tapping through the Android controls.


The concepts are similar to web automation, and getting this basic script working will make you more comfortable leveraging your web automation knowledge in the mobile app testing arena.

Google Cloud Next for an AWS User

By Andrew May

I recently attended the Google Cloud Next ‘19 conference in San Francisco, thanks to my generous Leading EDJE training budget. I went because I wanted to learn more about Google Cloud Platform as most of my cloud experience has been with AWS. I’ve been to Amazon’s re:Invent conference a couple of times and thought it would be interesting to compare the conferences and the platforms.

Conference Experience

There’s no doubt that part of the conference experience is the city where the conference is located, and walking through the Las Vegas casinos (many times) is an integral part of re:Invent. As re:Invent expands to more locations, getting around has become a real problem. Google Cloud Next took place at the Moscone Center in downtown San Francisco with a few other venues close by, making the conference seem smaller and more compact, even with nearly 30,000 attendees.

While Google Cloud Next is a shorter conference (3 days instead of 5), the format of both conferences is similar: a combination of keynotes, sessions and workshops, along with a giant vendor expo.

My biggest disappointment with Google Cloud Next was the lackluster keynotes. The main reason (for me at least) to attend keynotes is the expectation that there will be big announcements. Two of the three keynotes did contain announcements (the developer keynote did not), but they were lacking in build-up and technical details, and I would have been better informed by reading the Google Cloud blog from my hotel room. There was none of the excitement and technical insight that the re:Invent keynotes contain.

I did find the sessions I attended very informative, especially with my limited GCP experience. I selected a few introductory sessions to give a grounding in the services available, but largely stuck with higher level technical sessions.

The expo area was open all three days most of the day, unlike re:Invent where it seemed to be closed more often than not. As well as vendors, there were displays from Google, and a large labs area where I did some late night Qwiklab labs and took part in the “Cloud Hero” competition.


Announcements

From Google’s perspective I think that Anthos was the biggest announcement, but with pricing starting at $10,000/month it’s a little hard for me to get excited about. The idea of a consistent control plane and APIs for Kubernetes regardless of where it’s hosted is interesting and may eventually be transformative, but the announcement and documentation are currently light on technical details, especially around how they plan to support other cloud platforms. I also listened to a podcast about the Anthos Migrate service that will transform VMs into containers, and all I got out of it is that it uses “streaming” to migrate.

I was much more excited by Cloud Run that provides serverless deployment of containers based upon the open source Knative runtime. I’m pretty sure this didn’t get announced at the keynote, but the sessions afterwards expected us to be aware of it. The big differences between Cloud Run and other function based serverless cloud functions (e.g. Google Cloud Functions and AWS Lambda) are the ability to provide a full container, and the concurrency model that allows for multiple requests to the same instance. The demonstrations showed fast startups (which I’ve confirmed myself) and rapid scaling under load. Scaling is based upon Knative’s throughput model rather than the less useful CPU based scaling typically used in Kubernetes.

The announcement of managed Microsoft SQL Server and Active Directory seems to indicate that Google has become more serious about attracting enterprises to their cloud platform.

Google also announced an interesting partnership model with a number of open source providers including Elastic, MongoDB and Redis Labs. You’ll be able to provision services from these partners from within GCP with a consistent interface, unified billing and initial support from Google. This compares to the AWS model of taking and hosting open source projects and sometimes forking them (e.g. ElasticSearch).

There were a lot of other announcements, and I encourage you to look through the full list.


Impressions

So based upon all this new information I’ve absorbed, what are my impressions of the Google Cloud Platform?

  • More of a focus on global and multi-region services than AWS - for example global load balancers and VPCs, but this seems to address the compute side of things more than data.
  • The strongest Kubernetes implementation (unsurprisingly) and a range of services that build upon this ecosystem.
  • The Cloud Shell (which Azure also has, but AWS lacks) is very useful.
  • Managed SQL Server and Active Directory shows a new focus on enterprise migrations that was perhaps lacking before.
  • Interesting machine learning integrations into other services for data analysis.
  • A much smaller selection of services than AWS, but perhaps enough for most projects.

Without actually using GCP for a real project I reserve the right to change my mind, but I’d be interested to work on a GCP project.

Making the Case for Software Testing

By Tom Hartz

I was recently talking with a skeptic of software testing.  In his view, writing code to test other code seems dubious. Why would you want to create and maintain a bunch of extra code to prove your original code does what it should?  His argument was that testing was unnecessary and not worth the effort (you’re going to manually test it anyway).

I can see his point of view, even if I don’t agree with his sentiment.  I think he was coming from a place of bad habit, having never written any tests for any of the application code he’s ever built.  From that point of view, testing can seem like a daunting task, requiring a ton of extra effort for no tangible benefit.

There are many kinds of software tests you can write, and all of them provide benefits.  The longer the life expectancy of an application, the more benefit tests provide. You may not be the sole programmer on a project for its lifetime, and having specified tests is actually an effective form of documentation.

Unit Testing involves writing code to call individual functions. While application logic may contain needed complexity to solve business problems, the tests should be pure and simple. By following the AAA pattern (Arrange, Act, Assert, a.k.a. Given/When/Then), unit tests are clear and easy to follow. Unit tests validate that business logic executes correctly and returns expected outputs under different scenarios of inputs. Adding unit tests is somewhat tedious, but with practice they become so easy to write that there is no good excuse for skipping them. They provide proof that the code is well built and that individual units of code work as expected.

Integration Tests are similar, but instead of mocking dependencies you inject the concrete components of your application and test how they interact.  This kind of testing also can validate that the software behaves in a reasonable amount of time, and has good performance in all cases.

UI Tests are usually harder to write and likely to break over time with application changes.  But they can replace or reduce the amount of manual regression testing you or your QA team has to perform.

I often hear from programmers working on personal projects that they have zero tests. I think people tend to skip testing on personal projects because it feels like work, and developing the software functionality itself is more fun. As we develop better tools for building software, testing will become even easier, to the point where everyone’s personal projects will include tests. There are tools being developed, like Randoop, that automatically generate unit tests!

I haven’t said anything about Test Driven Development (TDD) yet.  TDD is the practice of writing tests BEFORE you implement any code for any given new feature.  I am not a TDD zealot; I think it is mostly a useful practice for defect fixes. When you get a bug report and don’t yet know the cause, it is usually pretty easy to write a failing test at some level to reproduce the bug.  From there, you can step through and debug and find the right refactor to fix the defect and pass the test. This is an efficient way to work through the bug and provide coverage to prove the correct behavior from there on. However, in my experience doing green-field development, writing tests first is a struggle.  I prefer to at least design my components and interfaces from a high level before adding test cases.

Bottom line, I think a testing suite provides tremendous value to an application.  If I am joining a project I would always prefer that codebase to have some form of testing in place as a way for me to understand and follow the code.  It also gives me a sanity check that any changes I commit do not adversely affect other parts of the app (ideally as part of an automated build CI pipeline).  To non-programmers and stakeholders, test coverage gives confidence that the software meets the requirements, and is usable and stable. Software testing is absolutely a worthwhile endeavor!

CI & CD for iOS Apps

By Tom Hartz

Continuous Integration and Continuous Delivery have become pillars of modern development.  Reporting breaking changes back to your dev team quickly can improve both the quality and the speed at which you deliver software.  Likewise, automating the release pipeline for your deliverables cuts down on redundant labor and allows you to focus more time on meaningful work.  This all applies in the realm of mobile application development, but there are some special considerations to account for with the iOS platform.

Building iOS applications requires Mac hardware.  This means your development team will need MacBooks, but it also means to do CI/CD you will need a server running macOS.  Recently on a project, I worked through setting up a Mac Mini machine as a Bamboo build agent to perform CI/CD tasks for both mobile platforms (Android build plans can be executed on any type of agent: Windows/Linux/Mac).  In this article, I will share some of the issues I encountered as well as some of the concrete build plan steps.

Branch Detection

A standard CI setup is going to start with Branch Detection. Every time new code is pushed to a feature branch, the unit tests are executed against the changes. This part of your pipeline does not have to be restricted by platform, even if you are building a native iOS app and have a suite of XCTest cases in Swift. XCTest cases supposedly can be made to run on Linux, although I have never personally tried it. It is probably simpler and easier to set up this part on a Mac agent with Xcode anyway, and the other steps WILL require it. Branch Detection build plan steps are as follows:

1. Source Code Checkout
2. Run Unit Tests
3. Check Code Coverage

Merging the feature branch into development/master should require a passing test suite and one or more code review approvals.
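On a Mac build agent, steps 2 and 3 above typically boil down to a single xcodebuild invocation; here is a sketch, where the workspace, scheme, and simulator destination are placeholders for your project's values:

```shell
# Run the XCTest suite on a simulator and collect code coverage
# (workspace, scheme, and destination are placeholders)
xcodebuild test \
  -workspace 'MyProject.xcworkspace' \
  -scheme 'MyProject' \
  -destination 'platform=iOS Simulator,name=iPhone 11' \
  -enableCodeCoverage YES
```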

UI Testing

Typically after merging one or more feature branches, the next step for CI/CD is to create a build for automated UI tests and/or manual regression testing. Automated UI testing for iOS has some significant challenges! Using a device simulator is easier to manage, but you cannot install any given .ipa onto it; the app has to be compiled specifically for the simulator CPU architectures.

iOS Device CPU architectures:

  • arm64 is the current 64-bit ARM CPU architecture, used in the iPhone 5S and later (6, 6S, SE and 7), the iPad Air, Air 2 and Pro, with the A7 and later chips.
  • armv7s (a.k.a. Swift, not to be confused with the language of the same name), being used in Apple's A6 and A6X chips on iPhone 5, iPhone 5C and iPad 4.
  • armv7, an older variation of the 32-bit ARM CPU, as used in the A5 and earlier.

iOS Simulator CPU architectures:

  • i386 (i.e. 32-bit Intel) is the only option on iOS 6.1 and below.
  • x86_64 (i.e. 64-bit Intel) is optionally available starting with iOS 7.0.

This unfortunately means you cannot test an app bundle using a simulator, and then promote the same “artifact” for release.  For iOS you have to do physical device testing, from which you can then promote an app bundle to production. Keep in mind that automated testing on tethered physical devices will require more overhead to maintain and keep the CI pipeline running.
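If you are ever unsure which architectures a particular binary contains, you can check it directly; a quick sketch with lipo, where the path is a placeholder for your unzipped .ipa contents:

```shell
# Print the CPU architectures baked into the app's main binary
lipo -info Payload/MyProject.app/MyProject
```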


Producing an .ipa file from an automated build can be done in two steps:

1. Create an archive and sign

security unlock-keychain -p $PASSWORD login.keychain

xcodebuild -workspace 'MyProject.xcworkspace' -scheme 'MyProject' -configuration Debug -archivePath './archive/MyProject.xcarchive' clean archive -UseModernBuildSystem=NO DEVELOPMENT_TEAM=########## CODE_SIGN_IDENTITY="iPhone Developer: Tom Hartz (##########)" PROVISIONING_PROFILE='MyProject development profile' OTHER_CODE_SIGN_FLAGS="--keychain $HOME/Library/Keychains/login.keychain-db" CODE_SIGN_STYLE=Manual

2. Export the archive into an app bundle (.ipa)

xcodebuild -exportArchive -archivePath './archive/MyProject.xcarchive' -exportPath 'archive/ipa' -exportOptionsPlist Development.plist


The contents of Development.plist look something like this (the bundle identifier key shown is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
        <key>method</key>
        <string>development</string>
        <key>provisioningProfiles</key>
        <dict>
                <key>com.example.MyProject</key>
                <string>MyProject development profile</string>
        </dict>
</dict>
</plist>
After creating an app bundle, your build plan can save it as an artifact in Bamboo, and also upload it to TestFlight, HockeyApp/App Center, etc. for internal deployment and testing.

Artifact Promotion & Release to Prod

After regression testing has been signed off by your QA team, you will want the final step of your CD pipeline to pick up this artifact and release it to production.  A release plan in Bamboo would be as follows:

1. Download Artifact
2. Modify for release
   a. Unzip .ipa bundle
   b. Replace dev provisioning profile with production
   c. Delete existing code signature
   d. Perform any custom app configuration (set Prod API url, etc.)
   e. Resign bundle using production entitlements
   f. Re-Zip bundle
3. Upload to Apple (App Store Connect)

Here is a shell script example I wrote that performs all of step 2:

echo "begin iOS release build..."

# Note: the Payload/ paths below assume the unzipped app bundle is named MyProject.app
mkdir ipa
cp MyProject.ipa ipa/

echo "copying provisioning profile..."
cp $HOME/Downloads/MyProject_Distribution_Provisioning_Profile.mobileprovision ./ipa

echo "unzipping ipa..."
cd ipa
unzip -q MyProject.ipa

echo "unlocking keychain..."
security unlock-keychain -p $PASSWORD login.keychain-db

echo "replacing provisioning profile..."
cp "MyProject_Distribution_Provisioning_Profile.mobileprovision" "Payload/MyProject.app/embedded.mobileprovision"

echo "removing existing code signature..."
rm -rf Payload/MyProject.app/_CodeSignature

echo "signing ipa..."
codesign --entitlements Entitlements.xml -f -s "iPhone Distribution: Leading EDJE LLC (##########)" Payload/MyProject.app

echo "zipping app bundle..."
zip -qr MyProject.ipa Payload/

echo "iOS build done!"



The Entitlements.xml file used to resign the bundle looks something like this (the team and bundle identifiers are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
        <key>application-identifier</key>
        <string>TEAMID.com.example.MyProject</string>
</dict>
</plist>

DIY vs. Cloud CI

My experience and approach in this article is focused on an in-house custom solution for CI/CD in Bamboo, but there are third party services available which can provide some of these build actions for you.  You may be able to subscribe to one or more of these services to avoid having to buy Mac hardware to run your builds and test suite:

AWS Device Farm
Buddybuild
Sauce Labs

DISCLAIMER:  I have not tried any of these services, so your mileage with them may vary.  While they may simplify things somewhat, I expect you will still need to configure them quite a bit with custom build steps and options, depending on the specifics of your iOS project.

W.Y.T.I.W.Y.G. - What You Test Is What You Get

By Ed LeGault

If you were a web developer in the late ’90s or early 2000s, you probably used or had access to a WYSIWYG editor of some sort. The acronym stands for “What You See Is What You Get”. These editors allowed you to change HTML code and then see what the web page would look like. You could drag and drop images or change alignment and formatting, and see what HTML code would be generated by the changes. The point being that what you saw as you edited the page was what you got when the HTML was rendered by a browser.

Skip ahead a bunch of years to the new world of containerization. Building images and deploying applications as containers provides the ability to package code along with the runtime libraries and configuration required to run the application. Containers give you comfort in knowing “What You Test Is What You Get”. However, some steps need to be taken to gain this level of consistency.

When an image is built, it is typically pushed to a repository and versioned with a particular tag signifying that it is a release candidate. As a general rule, images should only be built when changes are made in the repository; this includes images built from integration or feature branches. That image is then deployed as a container to be tested. Once testing is complete, the image can be re-tagged as an official release and deployed to production. The image is not re-built but re-tagged, which means that when that same image is deployed as a container in production, it is the exact same image that was tested.

A goal in any CI/CD process should be to make sure artifacts are not re-built or changed between deployments as they travel from testing phases to release phases. Instead, once tested, all artifacts should be re-tagged (if needed) and then deployed to production. This provides reliability and consistency of production deployments. Combined with deploying the infrastructure configuration as code, a production release is no longer a scary undertaking.
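The build-once, re-tag flow described above can be sketched with the Docker CLI; the image name, tags, and registry below are placeholders:

```shell
# Build once and tag as a release candidate
docker build -t registry.example.com/myapp:1.4.0-rc1 .
docker push registry.example.com/myapp:1.4.0-rc1

# ...deploy and test the rc1 image...

# After testing passes, re-tag the SAME image as the official release.
# No rebuild happens, so production runs exactly the bytes that were tested.
docker tag registry.example.com/myapp:1.4.0-rc1 registry.example.com/myapp:1.4.0
docker push registry.example.com/myapp:1.4.0
```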

Another key advantage of this consistency between test and production environments is the confidence in fixing defects and knowing that they should not reoccur.  Once a defect is found in production, it should be replicated in test via automated testing. The defect is then known to be fixed when the test passes in the test environment.  As long as the same artifacts are deployed to production, you can be confident that the defect is resolved. You can also be confident that the automated test covers the defect and that it will never return.

A consistent and reliable CI/CD process is a key DevOps principle that saves time and money because time isn't wasted chasing down problems during the release process.  Containers provide a way to achieve that consistency, because what you test is what you get (W.Y.T.I.W.Y.G.).

Pulling Strings with Puppeteer

By Dennis Whalen

On a recent QA automation assignment my team needed to quickly build and deploy some basic UI smoke tests for an enterprise web application.  After some discussion we decided to go with Puppeteer. This was my first exposure to Puppeteer, and I want to share a little of what I've learned so far.

So what is Puppeteer?  Puppeteer is an open-source Node library that provides a high-level API for driving the browser via the Chrome DevTools Protocol.

The first step to exploring the features of Puppeteer is to get it installed, so let’s get started!  

Puppeteer setup

npm i puppeteer

And there you go!  Once you successfully install Puppeteer, you have also downloaded the version of Chromium that is guaranteed to work with the installed Puppeteer APIs.

If you don’t want the overhead of that download and want to test with an existing install of Chrome, you can install puppeteer-core instead.  Just be sure the browser version you plan to connect to is compatible with the version of Puppeteer you’re installing, which is found in the Puppeteer package.json file.

Taking a screenshot

We're now ready to create our first test, and we'll start with something basic.  For this test we'll open the browser, navigate to the Leading EDJE home page, save a screenshot of the page, and close the browser.

Create a new folder for your tests, and then create new file named screenshot.js:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1680, height: 1050 });
  await page.goto('https://www.leadingedje.com', {waitUntil: 'networkidle2'});
  await page.screenshot({path: 'le-screenshot.png'});
  await page.pdf({path: 'le-screenshot.pdf'});

  await browser.close();
})();

If you're familiar with other UI automation frameworks, this probably all looks familiar.  We open the browser, override the default 800x600 resolution, navigate to the page, capture the screenshot, then close the browser.  We also capture the page in both PNG and PDF format, with just two lines of code.

That’s the code, so now let’s run it!

node screenshot.js 

If this runs successfully, you should see no errors on the command line, and new files named le-screenshot.png and le-screenshot.pdf.  Open the PDF file and notice that the full page is captured.

What you won't see is the browser opening.  That's because by default Puppeteer runs headless, which is necessary when running as an automated CI process.  If you want to see the browser in action, simply set the headless option when launching the browser:

const browser = await puppeteer.launch({headless: false}); 

Google search automation

Let’s create another test and name it google.js:

const puppeteer = require('puppeteer');
const { expect } = require('chai');

// puppeteer options
const opts = {
  headless: false,
  slowMo: 100,
  timeout: 10000
};

(async () => {
  const browser = await puppeteer.launch(opts);
  const page = await browser.newPage();
  await page.setViewport({ width: 1680, height: 1050 });
  await page.goto('https://www.google.com', {waitUntil: 'networkidle2'});
  console.log('search page loaded');

  const searchTextbox = await page.waitFor('input[name=q]');
  await searchTextbox.type('meeseek');

  await Promise.all([
    page.waitForNavigation({waitUntil: 'networkidle2'}),
    page.keyboard.press('Enter')
  ]);
  console.log('meeseek results page loaded');

  expect(await page.title()).to.contain('Google Search');

  await page.screenshot({path: 'meeseek.png'});

  await browser.close();
})();

With this test we are navigating to Google, performing a search, waiting for the results, and validating the results page title.

In addition, we are slowing the test by 100ms for each operation by using the slowMo option when launching the browser.  This can be useful if you have a fast-running test and want to be sure to see all the browser interactions.

We've also set the timeout to 10000ms.  Any operation that takes longer than 10 seconds will fail.

Performance Tracing

For our last example we’re going to step away from basic UI automation and use Puppeteer to capture performance trace information.  

The Performance tab in Chrome dev tools allows you to record critical browser performance metrics as you navigate through your website.  With these metrics you can troubleshoot performance issues by analyzing what Chrome is doing under the hood to render your site.

We are going to modify our Google example a bit to automatically capture a trace file during the automated test.  From there we can load that trace file into Chrome dev tools and see what’s really happening during our test.

Create a new file named trace.js:

const puppeteer = require('puppeteer');

// puppeteer options
const opts = {
  headless: false
};

(async () => {
  const browser = await puppeteer.launch(opts);
  const page = await browser.newPage();
  await page.setViewport({ width: 1680, height: 1050 });
  await page.tracing.start({path: 'trace.json', screenshots: true});
  for (let i = 0; i < 10; i++) {
    await page.goto('https://www.google.com', {waitUntil: 'networkidle2'});
    console.log('search page loaded');

    const searchTextbox = await page.$('input[type=text]');
    await searchTextbox.type('meeseek box');

    await Promise.all([
      page.waitForNavigation({waitUntil: 'networkidle2'}),
      page.keyboard.press('Enter')
    ]);
    console.log('meeseek results page loaded');

    await page.screenshot({path: 'meeseek.png'});
  }

  await page.tracing.stop();

  await browser.close();
})();

For this test we are looping through our Google search 10 times, but more importantly we are starting a trace prior to the automation with the line:

await page.tracing.start({path: 'trace.json',screenshots:true}); 

With this line of code we create a trace.json file of the entire automated session, including screenshots.  From there we can load that file into Chrome dev tools and troubleshoot manually, or automate further by parsing the trace file programmatically to proactively identify performance issues.
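As a sketch of that programmatic route, a small Node script could total up time per event name; the event objects below are a minimal, synthetic subset of what a real trace.json contains:

```javascript
// Sketch: summarize a DevTools trace by totaling duration per event name.
// Real trace files wrap events in a "traceEvents" array; "dur" is in
// microseconds and is only present on complete events.
function summarizeTrace(traceEvents) {
  const totals = {};
  for (const evt of traceEvents) {
    if (typeof evt.dur === 'number') {
      totals[evt.name] = (totals[evt.name] || 0) + evt.dur;
    }
  }
  return totals;
}

// A couple of synthetic events stand in for a real capture here:
const sample = [
  { name: 'FunctionCall', dur: 1200 },
  { name: 'Paint', dur: 300 },
  { name: 'FunctionCall', dur: 800 }
];
console.log(summarizeTrace(sample)); // { FunctionCall: 2000, Paint: 300 }
```

From there you could flag a test run whenever, say, total scripting time exceeds a budget.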

Here’s what the trace file looks like when I manually load it into Chrome:


Although Puppeteer provides functionality similar to Selenium, it is not meant as a replacement.  Selenium provides a single common API for performing browser automation across all major browsers.  Puppeteer targets only Chrome and Chromium, and its strengths include a broader set of services and an event-driven architecture that allows for fewer flaky tests and failures.

Feel free to take a look at my GitHub project that contains all of these examples.  Give Puppeteer a test drive and make Chrome dance!

Using Firebase for Push Notifications

By Tom Hartz

Firebase is a mobile app development toolkit published by Google that can perform many different functions.  In this article, we will look at how to implement push notifications using Firebase Cloud Messaging (FCM).

Account creation and Project settings

To get started, you will first create an account and configure your application using the Firebase console.  After creating the project, open Project settings (gear icon) and then follow the steps for adding iOS and Android apps to the project.  During this setup you will download config files for both platforms: [GoogleService-Info.plist] for iOS and [google-services.json] for Android.  The config files are used by Firebase at runtime to link your app to your Firebase account. You will need to bundle these config files and add the Firebase SDK to each app.

For iOS, there is additional configuration required to enable push notifications.  In the Firebase console Project settings, select “Cloud Messaging” and the “iOS app configuration” section should appear.  Here you have two options for configuration: APNs Authentication Key and APNs Certificates.

Using certificates is tricky.  You first create the certs for both development and distribution from the Apple developer portal, then download and export them as .p12 files from your Keychain Access app (macOS is required here).  Even after getting them uploaded correctly, there are still headaches with this approach. Sending test notifications from the Firebase console uses the Development cert; however, if you use the FCM API to send notifications from your own backend, it will always use the Production cert for the notification (regardless of whether you have your own development backend environment).  This means that the notification will only appear on your device if you have signed the app for Distribution, which is problematic for developing and testing internally.

Instead of having to deal with these signing compatibility issues, the recommended approach is to use an APNs Authentication Key.  You need to create just one key in the Apple developer portal, export it as a .p8 file from your keychain, and upload it to the Firebase console.  This key is compatible with both Development and Distribution provisioning profiles which simplifies things considerably.

App code & Firebase Tokens

In your app code you will need to make a call to initialize the Firebase SDK during startup.  For native apps, there is code provided in the app creation step. For hybrid apps, there are plugins available that handle this for you (I have used and recommend this plugin).  Additionally for iOS, you will need to turn on the Push Notifications capability in Xcode.  You will also need to make a call to the SDK method “grantPermission” to allow notifications.  Invoking this method will trigger the standard prompt at runtime in your app: “AppName” Would Like to Send You Notifications (Don’t Allow / OK).  For Android, you will need the Firebase service registered in your manifest file (and for hybrid apps, this same plugin handles that for you as well).

Firebase tokens, also sometimes referred to as registration IDs, are special identifiers that uniquely identify a mobile device and will be used to send our notifications.  Your app needs to use the SDK to call "getToken". You will need to provide logic to handle saving and uploading the token to your backend. You should also use the SDK to register a callback function with "onTokenRefresh".  Be aware that Google may at some point expire an FCM token, and in that event the onTokenRefresh callback will be invoked to inform your app of the newly generated token. Your app should handle replacing that token and pushing the update through to your backend as well.
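One way to sketch that bookkeeping is a single handler shared by the initial getToken call and the onTokenRefresh callback; the storage object and uploadToken callback below are placeholders standing in for your persistence and backend code, not Firebase APIs:

```javascript
// Sketch: save the current FCM token and only re-upload it to the
// backend when it actually changes.
function makeTokenHandler(storage, uploadToken) {
  return function handleToken(newToken) {
    if (storage.fcmToken === newToken) return false; // nothing to do
    storage.fcmToken = newToken;   // persist locally
    uploadToken(newToken);         // sync to your backend
    return true;
  };
}

// Usage: wire this into getToken() at startup and onTokenRefresh():
const storage = {};
const uploaded = [];
const handleToken = makeTokenHandler(storage, (t) => uploaded.push(t));
handleToken('token-A');  // initial token: uploaded
handleToken('token-A');  // unchanged: skipped
handleToken('token-B');  // refreshed token: uploaded
console.log(uploaded);   // [ 'token-A', 'token-B' ]
```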

Notification Icons

On iOS, notifications appear on the device with your app icon.  For Android, push notification icons are different from the app icon and must be provided separately.  The standard here is to use an alpha greyscale image, and apply a color to the icon, either hard-coded in the AndroidManifest.xml file or set dynamically in the notification payload.  Something to keep in mind with the Android icon color is that you do not have full freedom to use any color! Whatever color you set may be overridden by Android to ensure that it has good contrast.  In my example below, the color I specify in the FCM payload (#96c93e) is changed when displayed on my Android phone (#508400).

Android Studio provides a tool for generating the different icon resolutions.  Right click on your project folder and select New > Image Asset. Select “Notification Icons” for the Icon Type, then for Asset Type choose “Image” and choose a file for Path:

You then can specify the default notification icon to use in the AndroidManifest.xml file:
       <meta-data android:name="com.google.firebase.messaging.default_notification_icon" android:resource="@drawable/notification_icon" />

Sending Notifications

There are generally two options for generating push notifications from your backend: the Admin SDK or a REST call.  The Firebase Admin SDK is available for a variety of languages.  The documentation here will instruct you on how to send push notifications to individual devices.

Alternatively, you may use the FCM REST API ("raw protocols") to send notifications using a standard POST web request.  There are currently two versions of the REST API. The legacy version is the only one that supports "multicast" messaging (sending a notification to multiple devices with one POST).  You must include your Firebase server key (found in Firebase Console > Project settings > Cloud Messaging) in the Authorization header. An example request looks like this:

POST https://fcm.googleapis.com/fcm/send
Content-Type: application/json
Authorization: key=YOUR_SERVER_KEY_HERE

{
    "registration_ids": ["YOUR_FCM_TOKEN_HERE"],
    "content_available": true,
    "priority": "high",
    "notification": {
        "title": "Notification Demo",
        "body": "Testing push notifications!",
        "color": "#96c93e"
    }
}
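From Node, one way to sketch assembling that request is a small helper; the server key and token values are placeholders, and only the request construction is shown here, not the actual HTTP send:

```javascript
// Sketch: build the pieces of a legacy FCM send request; hand the result
// to the HTTP client of your choice. The URL is the legacy FCM endpoint.
function buildFcmRequest(serverKey, tokens, title, body) {
  return {
    url: 'https://fcm.googleapis.com/fcm/send',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'key=' + serverKey
    },
    body: JSON.stringify({
      registration_ids: tokens,
      content_available: true,
      priority: 'high',
      notification: { title: title, body: body }
    })
  };
}

const req = buildFcmRequest('YOUR_SERVER_KEY', ['YOUR_FCM_TOKEN'],
  'Notification Demo', 'Testing push notifications!');
console.log(JSON.parse(req.body).notification.title); // Notification Demo
```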

With everything configured correctly, you should be able to send push notifications from your server and see them appear on your device!

Code Coverage - a false sense of security

By Ed LeGault

Code coverage is a very common metric used to gauge code quality.  It can also be used to make sure that unit test coverage percentages do not go down over time.  However, there are a few reasons why coverage checks can be misleading and provide a false sense of security.

Code coverage is generally a good thing because it measures how much of your code is executed when being tested.  If coverage numbers exist, a build can be gated on certain coverage thresholds. It also means that if you are measuring coverage, you actually have tests!  However, code coverage numbers can be misleading, because coverage means only that the code was executed. It doesn't mean anything was actually verified, nor does it mean that failures are reported if faults are detected.  It also doesn't verify that each unit of code has its own test, because dependent code can cause lines to be covered that should otherwise be exercised by a specific set of unit test scenarios.

The easiest and quickest thing you can do to help ensure unit tests are reliable is to implement code reviews of the unit tests.  Code reviews of unit tests do not need to be quite as comprehensive as reviews of the code itself, but they are equally important. While reviewing unit tests, some things to consider are:

  • Make sure code has a test and is not being tested by something else
  • Make sure verifications are present in some way
    • If functional testing, verify outputs are as expected (black box)
    • If logical testing, verify expected things are happening (white box)

The second thing that can be done to help eliminate false positives in code coverage numbers is mutation testing.  Mutation testing is a practice in which the code is instrumented and changed in a manner that should make unit tests fail on purpose.  Mutators will change things like if (foo > 0) to if (foo >= 0). Mutants that survive are considered failures. This helps find edge cases and missing verifications that might not be found during traditional coverage checks.
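A tiny JavaScript illustration of a surviving mutant; the function and assertions are contrived for the example:

```javascript
// The code under test: a boundary check a mutator would target.
function isPositive(foo) {
  return foo > 0; // a mutator could change this to foo >= 0
}

// This assertion executes every line, so coverage reports 100%...
console.assert(isPositive(5) === true);

// ...but it never checks the boundary, so the >= mutant also passes it
// and "survives". Adding the boundary case kills the mutant:
console.assert(isPositive(0) === false);
```

A test suite that kills all such mutants is verifying behavior, not just executing lines.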

There are mutation testing frameworks available for a variety of languages, as well as plugins for code analysis tools such as Sonar.

Some things to consider when performing mutation testing are:

  • Mutation testing can make the build take much longer.  Instrumentation for mutation takes a lot longer than instrumentation for coverage.
  • Mocking frameworks like PowerMock may either not work or produce faulty results, because both frameworks instrument the code they are testing.

Implementing code reviews of unit tests and performing mutation testing will help ensure that code coverage metrics can be relied on, because covered code is not necessarily tested code.


A (belated) IT Thanksgiving

By Andrew May

Most Thanksgiving articles are about family, friends and food, but I thought it would be interesting to think about what we should be thankful for in our IT careers.

Open Source

Open source software is everywhere today - from phones to browsers to operating systems, and the huge choice of libraries we rely on to develop almost every application. This is a significant change from when I began my career, when open source was considered a niche ideology and not necessarily trusted in the IT industry. Fast forward to today: both the Java Development Kit and .NET Core are open source, and it would be very unusual to write an application without relying on third-party open source libraries.

You can show your thanks for Open Source software in one or more of these ways:

  • File a bug report: If you have an issue, don't just complain about it; create a bug report with enough information that the project maintainers can recreate the issue.
  • Fix a bug or add a feature: There are many opportunities to make changes in smaller projects. I've made enhancements and fixes to a range of different projects, usually because I was using a library or utility and wanted it to do something slightly differently.
  • Write an open source project: It doesn't have to be anything large; there are many very small projects that are widely used because they meet a need (see the average JavaScript/Node project).
  • Donate: Some large (e.g. Mozilla) and small projects are funded by donations. Sometimes it's just giving enough to pay for a coffee or beer to show your appreciation.
  • Evangelize your favorite open source projects.

Moore's Law

The increase in computing power available has made many things possible. As more of our lives are lived online it’s been necessary to build systems that can cope with massive numbers of concurrent users. Fields of computing such as Machine Learning have become practical because we finally have the resources to implement algorithms that were conceived well before we had the capability to utilize them.

The jump in computing power between my first computer (a Sinclair ZX81) and the laptop I’m writing this on is hard to fathom.

Public Clouds

Cloud computing has been called a "democratization of technology" because it makes the same resources available to any developer, whatever project, huge or small, they are working on. It's possible to experiment with different technologies, paying only for what you use.

I’ve worked a lot in AWS, but I’ve been able to try out both Microsoft Azure and Google Cloud Platform making use of generous free trials. I attended the free Global Azure Bootcamp last year, and received even more credits.

IT Community

I hate to admit how often I end up on Stack Exchange searching for answers. However, I'm very grateful for all the time people have spent asking, answering, and editing questions. There are also all the people writing documentation and blog posts.

More locally, there’s a large number of user groups and meetups in the Columbus area for all different areas of IT. We also have a good number of local and regional conferences, including CodeMash, Stir Trek, DevOps days and NFJS.

Finally, I’m thankful for all my great colleagues at Leading EDJE.

Test Data Strategies for a QA Automation Framework

By Dennis Whalen

When building a QA test automation framework, it's critical to define the strategy for dealing with test data.  For example, what’s wrong with this scenario?
Given the user successfully logs into registration system
When the user searches for course number “ALG-4316”
Then the course is displayed
And the course has title of “Intermediate Algebra”
And the course has credit hours of “4”

Clearly the scenario contains hard-coded test data: course number, course name, and credit hours.

When first building test automation it's easy to start like this.  We're testing in the dev environment, we know this course exists in the database, so let's start building some automated tests!  Although it's easy to get started like this, it can quickly become impossible to sustain.

Let's take a look at some alternatives for dealing with test data.

Option 1 - Hard-coded test data


The architecture behind the hard-coded test data option is not too difficult to understand.  With this option, the test data is defined within the test process and stored in data files or hard-coded directly in the test.


Pros:

  • Start building and completing tests quickly


Cons:

  • Long term test maintainability and brittleness
  • Issues running tests in different environments/databases
  • Could require a database restore strategy prior to running the test

When appropriate?

Although all tests may contain some hard-coded test data, it’s really only appropriate for [1] static data that will not change across environments, for [2] data that is not relevant to the test, or for [3] test setups where you can always restore the database to a known state before testing.


Option 2 - Real time test data extract

Let’s try the gherkin again and remove the hard-coded test data:

Given the user successfully logs into registration system
When the user searches for an active course
Then the course is displayed
And the course title is correctly displayed
And the course credit hours are correctly displayed

With this option we are no longer defining the test data in the test.  The automation code that’s executed behind this test will be responsible for identifying the necessary test data and expected results.  But how?


With real time test data extract, the test process will extract the necessary test data from the application as we execute the test.  So instead of hard-coding the course number, we can first extract a valid course number and associated course detail from the application, and then use that data in our test.
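As a hypothetical sketch of that flow, the fetchCourses API and the course record shape below are assumptions, with a stub standing in for the real application API or database read:

```javascript
// Sketch: pull an active course from the application and derive the
// expected results from it, instead of hard-coding them in the test.
async function getActiveCourse(api) {
  const courses = await api.fetchCourses({ status: 'active' });
  if (courses.length === 0) {
    throw new Error('no active course available to test with');
  }
  return courses[0]; // any active course will do
}

// A stub stands in for the real application API here:
const stubApi = {
  fetchCourses: async () => [
    { number: 'ALG-4316', title: 'Intermediate Algebra', creditHours: 4 }
  ]
};

getActiveCourse(stubApi).then((course) => {
  // The extracted data now drives the search and the assertions.
  console.log(course.number); // ALG-4316
});
```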

As shown in the diagram below, the data can be extracted via an existing application API or by reading it directly from the application database.


Pros:

  • Tests are not reliant on specific hard-coded test data
  • Tests will work across multiple test environments


Cons:

  • Need a reliable mechanism to read from the application (API or DB)
  • Assumes the database already contains the data you need
  • Could have issues with concurrent tests grabbing the same test data


When appropriate?

This option is appropriate when [1] the necessary test data exists in the database, [2] there are no concerns with concurrent tests potentially using the same data, and [3] we have read access to the database, either through API or direct database access.

Option 3 - Real time seeding


With real time test data seeding, the test process will inject the necessary test data into the application as we execute the test.  So instead of testing with an existing course number, we can first inject a valid course number and associated detail into the application, and then use that data in our test.
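A hypothetical sketch of seeding; the createCourse call and course shape are assumptions, with a stub standing in for the real API or direct database insert:

```javascript
// Sketch: inject a unique course through the application API before the
// test runs, so concurrent tests never collide on shared data.
async function seedCourse(api) {
  const course = {
    number: 'TST-' + Date.now(), // unique per test run
    title: 'Seeded Test Course',
    creditHours: 3
  };
  await api.createCourse(course);
  return course; // the test now knows exactly what to expect
}

// A stub stands in for the real API or database write here:
const stubApi = { createCourse: async (c) => c };

seedCourse(stubApi).then((course) => {
  console.log(course.title); // Seeded Test Course
});
```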


As shown in the diagram below, the data can be injected via an existing application API or by inserting it directly from the application database.


Pros:

  • Tests are not reliant on specific hard-coded test data
  • Tests will work across multiple test environments
  • No data sharing issues with concurrent tests


Cons:

  • Need a mechanism to write to the database (API or direct)
  • Seeding data may be more complex than just extracting existing data

When appropriate?

This option is appropriate when [1] the necessary test data may not already exist in the database, [2] tests will run concurrently and can’t share the same data, and [3] we have write access to the database, either through API or direct database access.

Performance testing

The options described above may work well for functional testing, but what about performance testing?  With performance testing we'll simulate multiple concurrent users to verify the application can handle the required workload and respond within the expected response times.

Interacting with the application to acquire test data during a performance test just applies more load to the application.  This load is not representative of a real-world scenario and would likely skew the test results and server performance statistics.

To get around this issue we can decouple the test data creation process from the testing process.

As represented above, the necessary test data is injected into the application prior to the start of the performance testing.  In addition to injecting the test data, we’ll also write that test data to a data repository on a server that is separate from the application.

When the performance test starts, it will read test data from that test data repository.  This strategy will allow the performance tests to be driven by test data without seeding or extracting it during the test.

Although this process is probably the most complex method for dealing with test data, it does provide an additional bonus: the test data generated for performance testing can also be consumed by the functional UI and API tests, as depicted above.


As you define the test data strategy for your project, there is clearly not one correct way to do it.  In fact, you will likely use various options on the same project, depending on the needs of specific tests.

Regardless of your strategy, be sure to think about it up front, and refine it as you learn more about what works and what doesn’t.

iOS Code Signing and Deployment, Explained

By Tom Hartz

For the uninitiated, deploying apps onto iOS devices can be a daunting and downright confusing process. When developing a mobile app, simulators are great for rapidly iterating through the code-test loop.  However, it is still a good idea to test your app on physical devices early and often. Eventually you will want to distribute versions of your app for both internal testing and production releases.  Xcode has improved quite a bit over the years, making this a lot easier than it used to be. This article will attempt to clarify the requirements for deploying to iOS devices and describe the process of getting an app onto devices.

To deploy applications to iOS, you will need a computer running macOS (e.g. a MacBook).  You will also need to enroll in the Apple Developer Program, for which Apple charges $99 per year.  You then have to register devices, either manually on the Apple Developer website or automatically through Xcode.  Note that you are limited to registering 100 devices of each type for direct deployment via USB (100 iPhones, 100 iPads, 100 Apple Watches, etc.).  When you connect a device through USB, Xcode can automatically create a "provisioning profile" for your app, which is essentially a digital signature that allows the app to run on a device.  Development builds that are deployed in this manner expire after 90 days.

Another way to deploy dev builds is with Apple's TestFlight.  You can create a TestFlight configuration where you build your app and upload the .ipa file (app bundle) to Apple.  Each version of your app can then be released to your team for testing. Here, the limits are 1,000 users with 10 devices each.  Test users will first need to register and install the TestFlight app, through which they can access and install the released app versions. Again, builds deployed this way will only run on the device for 90 days.

To deploy to the App Store, you must build and sign your app for release and upload it to Apple.  You can then configure an app submission through App Store Connect (formerly iTunes Connect).  Once the team from Apple has reviewed and approved your submission, your app will appear in the App Store and be available for download. Releases deployed this way do not expire.

To deploy releases outside the App Store, you will need to enroll in the Apple Developer Enterprise Program, which costs $299 per year.  This is a separate account, in addition to the $99 Developer Program if you still want to deploy to the App Store!  The Enterprise program allows you to create provisioning profiles for apps deployed outside the App Store that do not expire.  These builds can be distributed by hosting the app on an internal website with a download link, via USB, or with third-party tools like HockeyApp.

We are all developers

By Ed LeGault

"But I am not a developer!" - This is a quote from a person on an ops team at my current client.  It led us into a lengthy DevOps discussion.  Leveraging tools and techniques that allow for configuration as code, along with breaking down barriers in the development life-cycle, makes this a key point when trying to change a culture during a DevOps transformation.

Many tools and technologies now exist that allow everyone involved in a project to be a "developer".  Automated tests can now be written as code and executed during various steps within a CI/CD process.  Infrastructure can be declared in YAML files when using tools such as Kubernetes, or for cloud deployments such as AWS CloudFormation.  This means that everyone has to manage source control, versioning and testing.  Test automation engineers (QA) and infrastructure engineers (Ops) now need an environment in which to develop and test their work.  These steps in the development life-cycle were previously thought of as things needed only for "development".  Quality checks and code reviews are also steps in the development life-cycle that must be followed across the entire group.  A YAML file change or a Ruby test change is just as important as a Java or JavaScript change.

A key outcome of achieving a DevOps culture is bringing traditional development, automated testing (QA) and operations together into one cross-functional team.  Now that this whole team is writing, reviewing, testing and versioning code, everyone needs to consider themselves a "developer".  Everyone on the team is producing something.  We are all developers.


Planning a Company Innovation Event

By Ned Bauerle

Regularly holding innovation events (at least annually) can have many benefits for your company and your associates. Associates get to learn and grow together, and you can advertise the event as a benefit, which can help when recruiting good talent and retaining the associates you already have.

Holding an event is an investment in your organization and in your associates.  It will be difficult to calculate the return on investment for the event, so realize that it is more an investment in your company culture.  Innovation events have the potential to generate new product features that you may never have conceived. Your associates will feel that you are investing in them by giving them an opportunity to learn something new and ultimately allowing them to prove some of their ideas to you.

It is important that management completely buys in to the event and exhibits positive energy.  Avoid setting any expectations for yourself or others about any particular outcome; rather, go in with an open mind and see what the team can do for your company.  People genuinely want to do a good job and help the company thrive.

Before you communicate the details of your innovation event to your associates, there are a few things you need to prepare ...


You need to determine what dates you will hold the event and how long it will be.  It is important to allow time for your associates to innovate as teams, typically with dedicated consecutive days to become fully engrossed in their projects.  Two to four days is a good length of time. If it is hard to take four consecutive days, you might break it into two sessions of two days each week. Identify dates when business is typically slow so that the teams can focus on innovating without being distracted by other business needs.

At Leading EDJE we plan our innovation events to be four days: three days of full-on development, then on the fourth day teams spend half the day preparing presentations and half the day presenting their projects.

Once you have blocked off the dates, you can procure a location for the event.  Find a space where the team(s) won't have any interruptions; a location away from the office is best, or at least a space that is not typically used for daily work.  There are innovation spaces designed and decorated to inspire, many of which provide treats to keep energy levels up and promote creativity.

At least six weeks before the event you should choose a theme and/or title.  Having a theme sets the stage for your event and lets you differentiate it year after year.  A cool or witty title like “Monster Tech Mashup” or “Hack Fight” can get the creative juices going and might also put a little competition into the event.  If you are drawing a blank, there are tools that can help you.

You can drive excitement for the event by making it a competition.  You might supply a prize for the winning team, or perhaps just bragging rights.  If you plan to make it a competition, you will need a way to measure and/or compare ideas.  At Leading EDJE we typically ask each team to evaluate the other teams' ideas using a scorecard (based on the theme).  You should determine what criteria the scorecard will use and whether there are any surprise categories to be kept secret until voting occurs.  Once you have a way to compute team scores, prepare a message describing the guidelines so that the teams know what is expected. Distributing the guidelines prior to the event avoids sore feelings from a team that may not have understood the rules.

Send a message to your associates describing the event, including the guidelines, and requesting that everyone participate.

Five to six weeks before the event, ask your associates to come up with and submit project ideas that are company- and/or industry-related.  If you don't get a flurry of project ideas, hold a brainstorming lunch or happy hour to promote collaboration.  You can set up tables or posters around the room where people can gather and jot down technologies they would like to work with, improvements to your products, and ideas they have seen in other industries.

Make sure you tell the teams how much time will be available to build out their ideas; they shouldn't restrict themselves to ideas that can be fully implemented in the allotted time frame, but they should choose ideas that can be prototyped for a demo.  The teams will also need an idea of how much money can be applied to project assets. They should provide a list of anticipated hardware, software, and network access as part of their project submissions.

Four to five weeks before the event you will need to establish teams.  You can either assign teams and allow them to select from the list of project ideas (this approach lets you align people who don't typically work together, for example), or present the project ideas and allow associates to sign up for a team.  If there are too many project ideas, you will need a process for narrowing the list down, and you may need to establish guidelines for minimum and maximum team size.

Two to three weeks before the event, after the teams have been established, let them know that they need to get together before the event to plan their approach.  By meeting ahead of time, team members can do some research and hit the ground running on the first day. Each team should also solidify what it needs for the project (hardware, software, network access, etc.) so that you can procure it prior to the event.

1 week before the event confirm that everything is ready and that you have ordered and received any of the requested assets needed by project teams.  If you are providing hardware, software, food, treats, drinks, etc. make sure that you have everything set for the first day. Confirm that the location is set as you are expecting and that the proper network connectivity is in place.  Make sure that you have tables and chairs configured in a way that teams can collaborate but are far enough from other teams that there isn’t too much disruption. Double check to ensure you have enough power strips and extension cords that can reach each team.  Make sure that your awards, trophies, and voting ballots are ready for your showcase day.

Running the event

You should kick off the event on the first day, perhaps by providing breakfast.  Let the teams know that you are excited to see what they come up with, and that they should tell you if they run into any issues with the assets you have provided so that you can help resolve them.  Try not to tie up much time with long-winded presentations; just let the teams get right to it. Budget a little extra money in case there are game-day purchases you need to make. If you are conducting voting, you might put up slides reminding the teams how they will be compared when determining the winner.

The Showcase

At Leading EDJE we prefer to have the teams demonstrate their ideas at the showcase, and we ask each team member to participate as a presenter for part of it.  Once a presentation is complete, the other teams vote using a scorecard, and commonly an additional team of managers or executive leaders provides a vote. We use one scorecard per team because teams commonly vary in size.

An alternative to team voting is to select a panel of voters (for example, some of the executives or partners).  Teams present their projects and ideas to be scored by the panel. If your company is medium to large, the executives may never get to know the teams or their abilities on a personal level, but if they participate on a panel at the innovation event they can interact directly with your team.  If you have ever tried to sell ideas upward, you probably understand that it can be difficult to get buy-in when upper management thinks of your team as a commodity rather than as people. Putting faces to the work and showing off the team's capabilities can help break down those barriers to communication.

Once the presentations are complete finalize the scoring and announce the winners.  Present any awards or trophies.

After the event ... (share the knowledge)

Ask the teams to record what they learned, the idea they had, and the technologies they used on a company shared space or wiki.

All in all, we have had great success with our innovation events at Leading EDJE. Our entire staff looks forward to the event as anticipation builds each year. The ideas produced are nothing short of amazing. We love to see what new and creative ideas our team comes up with every year, and it is a ton of FUN.

API Testing with Postman

By Dennis Whalen

If you’ve worked in QA automation for a while, you have no doubt heard of the test pyramid.  The test pyramid is a diagram used to visually depict test types and to give some general guidelines on how many of each to create.

As you can see from the pyramid, a well-rounded automated test suite should have a large number of unit tests, fewer API or integration tests, and even fewer end-to-end UI tests.  Unit testing is typically handled by the developer, UI testing is handled by QA, and API testing is usually a somewhat shared responsibility.

At times a QA team will focus on UI testing and ignore API testing.  It’s easy to understand the infatuation with UI tests:

  • UI tests are easy to map to user stories.
  • The business user can easily relate to UI tests.
  • Most folks working in QA automation have more experience with UI automation.
  • UI tests are a lot more fun to demo compared to a command line API test.


So why do we want more API tests than UI tests?  I can think of a number of reasons:

  • API tests allow for more code coverage, since it’s easier to test different code paths and error situations.
  • API tests run much faster, so we can get faster feedback to the developer.
  • API tests can find bugs earlier, before the UI layer is created.
  • API tests can be created before the API is developed by using mocking frameworks.
  • API tests are less brittle, therefore less costly to maintain.
  • API tests are necessary for APIs that don’t have a UI, such as APIs that are consumed by IoT devices or other services.


There are a number of tools that can be used to test APIs, such as SoapUI and REST Assured.  Another is Postman.  I have used Postman for a number of years for quick manual testing of REST services, but only recently have I really started looking at everything it provides.  So what is Postman?

Postman is an HTTP client marketed to both developers and testers to support development and testing of APIs.  Postman is available as a standalone client, and also as a Chrome app.  Google has announced plans to end support of Chrome apps in the near future, so the native standalone app is the way to go.


Everything with Postman starts with collections, which is how you store and categorize your individual API requests.  Collections allow you to organize your API requests in folders and subfolders, and the requests can be run together in Postman via Collection Runner.

Pre and post scripts

Postman allows automation developers to use JavaScript in any API request to interact with the request, the response, and global and environment variables.  These scripts are typically used to pass data between related API calls in a collection, and to verify that the results of an API request match the expected results. For example:

Verify the response status code:
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
Verify the response time of the request:
pm.test("Response time is less than 500ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
Verify the data returned by the request (the expected value here is illustrative):
pm.test("Customer Name Updated correctly", function () {
    pm.expect(pm.response.json().name).to.eql("Updated Name");
});
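The same scripting hooks can pass data between related calls: a test script stores a value from one response, and later requests in the collection reference it. Below is a minimal sketch; the `customerId` variable and `id` field are illustrative assumptions, and the stand-in `pm` object exists only so the snippet can run outside Postman (inside Postman, `pm` is provided by the sandbox):

```javascript
// Stand-in for Postman's sandbox `pm` object, so this sketch runs standalone;
// inside Postman, `pm` is provided automatically and this block is not needed.
const store = {};
const pm = {
  response: { json: () => ({ id: "cust-42", name: "Ada" }) },
  environment: {
    set: (key, value) => { store[key] = value; },
    get: (key) => store[key],
  },
};

// In the first request's "Tests" tab: capture the created customer's id
// ("customerId" and "id" are illustrative names)
pm.environment.set("customerId", pm.response.json().id);

// A later request in the collection can then reference it in its URL:
//   GET {{baseUrl}}/customers/{{customerId}}
console.log(pm.environment.get("customerId")); // → cust-42
```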

Environment variables

As you build and run tests, you’ll typically need to run the tests in a number of environments, such as local, dev, test, UAT, etc.  Postman provides an easy mechanism to define specifics about each of your environments and to quickly switch between them.
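Under the hood, an environment is essentially a named set of key/value pairs. As a simplified sketch (the field names are trimmed down, and the keys and URL are illustrative), an exported environment file looks roughly like this:

```json
{
  "name": "dev",
  "values": [
    { "key": "baseUrl", "value": "https://dev.example.com/api", "enabled": true },
    { "key": "apiKey", "value": "dev-api-key", "enabled": true }
  ]
}
```

A request can then use {{baseUrl}}/customers as its URL, and switching the active environment re-points every request in the collection at once.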

Test and develop in parallel

One of the drawbacks of UI testing is that you need the developer to build the functionality before you can really start building the guts of the test automation.  With API testing, you can start testing as soon as the API is complete, even if the front-end components have not been started.

Additionally, with mocking you don’t even need the API to be complete.  Postman provides a mocking service that allows you to define an API endpoint with the expected input and output.  When the sprint starts you can build and run your automation tests against that mock service.  Once the development team completes the API, you can just point to the API and run your test.

Proxy server to capture API calls

If you’re looking to quickly identify and build a test process for an existing business transaction, Postman provides a mechanism to capture the API traffic for a particular transaction and store the API calls in a Postman collection.  Just turn on Postman’s transaction interceptor, manually step through the transaction via the UI, and the API calls will be captured in a Postman collection.

Automating API tests with the Newman CLI

Newman is a command-line tool, built on Node.js and distributed via npm.  Once it's installed, you can run a Postman collection from the command line.  With Newman you can incorporate API testing and custom reporting into the CI/CD pipeline.
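To sketch how that might look, here is a hypothetical CI step that installs Newman and runs a collection with a JUnit report for the build server to consume (the collection and environment file names are assumptions):

```shell
# Install Newman globally, then run a collection against the test environment.
npm install -g newman

# --reporters cli,junit prints results to the console and also writes a
# JUnit XML file that most CI servers can publish as a test report.
newman run orders.postman_collection.json \
  -e test.postman_environment.json \
  --reporters cli,junit \
  --reporter-junit-export results/newman-report.xml
```

A failing assertion makes Newman exit with a non-zero status, which in turn fails the pipeline step.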

Team Collaboration

One of the key features of Postman Pro is sharing and team collaboration.  Postman collections can be shared and edited by authorized team members via a team API library in the cloud.  The team activity feed allows you to see all modifications made to a collection and to rollback to a previous version when necessary.


Postman comes in 3 pricing plans: Free, Pro, and Enterprise.  The core features for testing are included in the Free version.  The Pro version offers options for team collaboration in the cloud, while the Enterprise version includes extended support and higher cloud usage limits.  If you’re just getting started, the Free version will give you everything you need, and there are trial versions of Pro and Enterprise when you are ready.

Next Steps

In this post I have only scratched the surface of the cool features that Postman provides. To learn more about Postman and API testing, the Postman website is a great place to start.  Also, Postman has a yearly tech conference, and the sessions and talks from last year are available to view for free.

Finally, don’t get so enthralled with UI testing that you lose sight of API testing.  Be like the test pyramid; spend a lot more time with API testing than you do with UI testing!

Will an AWS Certification get me a high paying job?

By Andrew May

According to Betteridge's law of headlines, "Any headline that ends in a question mark can be answered by the word no." Personally I think it's a bit more complicated than that, and the answer is a qualified "maybe".

There's a lot of hype about AWS Certifications and how they will increase your salary and make you sought-after in the workforce. Here's an example from the summary of a training course:

Each of the AWS certifications commands an average salary of more than $100,000.00, with the average salaries of AWS-certified IT staff 27.5% higher than the salaries of their non-certified counterparts.

While these figures may be true (like all statistics given with no details of where they came from, they should be taken with a large pinch of salt), the claim suggests that a few hours of training will reap great rewards. I recently presented at AWS Community Day Midwest, giving an overview of all the AWS certifications. Afterwards I got many questions about whether the certifications would help in getting a job, so there are clearly a lot of expectations around them.

Certifications provide a measure of your level of experience with AWS, but it's possible to study and pass the Associate certifications with very limited hands-on time with the platform. The Professional certifications are at a significantly higher level and require a depth of knowledge that is hard to get without using AWS for production workloads. If you've used AWS in one job and you're looking for a new job, a certification may be a good way to round out your knowledge and record your experience. I recently studied for and passed the SysOps Administrator associate certification, and learned a few new things that I was able to immediately apply at my current client.

When we interview candidates for Leading EDJE we are looking for people with strong technical skills in a range of technologies, and not necessarily looking for a fixed list of skills for a particular role. If I'm talking to a candidate with AWS experience or a certification on their résumé, I'm likely to ask them questions about the platform. If they've only studied for the certification test and don't actually understand or follow the best practices from the certification, that gives a negative impression of their general technical expertise.

For other positions with specific job requirements where AWS experience is either required or nice to have, then a certification is likely to help you pass the resume screening process, but it will depend upon the interviewer(s) whether it actually helps you land the job. If a position requires hands on experience with AWS then a certification by itself may be insufficient, but it does demonstrate a willingness to learn.

One other area to consider is the applicability of the certification for your role. Some people take the Solutions Architect associate certification because it supposedly commands higher salaries than the Developer associate certification. However, the Developer certification (in particular the new version just released in June 2018) goes much more in depth into the details of developing with certain AWS technologies than the Solutions Architect certification, and is likely to be more relevant for developer positions. Of course you can take several or all the certifications, but you will find there is a lot of overlap between them at the associate level.

You should be conscious that the certifications only cover a small fraction of the services that AWS offers. AWS is constantly adding new services and making enhancements to existing services, so what you learned for the test may no longer be correct. If you're working in the AWS environment and following updates then you will be aware of many of these changes, but if you used the certification to learn about AWS and don't use the services on a regular basis you might get caught out in an interview.

In summary, the certification may help you get a job, and may allow you to command a higher salary, but don't believe all the hype.

Create a Culture of Innovation

By Ned Bauerle

It can be exhausting trying to keep up with market trends using traditional business-driven practices, particularly when it comes to driving new features or improvements. Customers demand products that evolve at a quick pace, or they get bored and may move on to a competitor's product. The goal today is to deploy new features, fixes, and improvements as quickly as possible in order to keep up with customer expectations.

As agile methodologies and good DevOps practices create a foundation to quickly develop and release products, many companies are realizing that they need to increase their pool of ideas to keep up with the pace. Several agile and lean frameworks are designed to incorporate a sprint of innovation and planning, such as SAFe (Scaled Agile Framework), where every sixth sprint is exactly that.

The point is that you need to utilize the talent you already have within your organization, your product development teams, by creating a culture of innovation. Let’s look at a few examples of companies that lead the pack for innovation practices such as Apple, Google, and Netflix.

Apple has been at the top of the list for companies that are the most innovative for many years running. They attract amazing talent because people genuinely want to work on their products. They encourage everyone to contribute new ideas and have no limits on how they invest in innovation. Good talent is attracted to Apple rather than Apple seeking out the talent.

Google has 20% time, where each employee is expected to apply 20% of their working hours to innovative ideas that may benefit Google.

Netflix has a culture of fully trusting employees to do what is right for the company, even going as far as having no required hours of operation or PTO policies. They focus on hiring the right people and trusting them to do what is necessary. There is room to experiment and make mistakes without repercussions.

Smaller companies and organizations that are just starting to cultivate an innovation mindset find it hard to implement strategies similar to those of these behemoth-sized organizations, which have enough staff to cover regular operations while letting people innovate part time.

There are other less intrusive ways to start building a culture of innovation, one of which is holding an innovation event or hack-a-thon with your associates.

Get started by first establishing a definition for Innovation in your organization, emphasizing that innovation does not always mean creating something new, but rather it is about creating new value.

Service-based architecture patterns such as microservices enable applications and functionality to be mixed and mashed in ways that would have been difficult as recently as five years ago. Your development teams know how your applications are put together and what service APIs are available.

Encourage your team to think about the services and applications that they have created and how they might be able to leverage services from different products to create new value. You will be surprised at the ideas that the team(s) will produce about how those services might mix and match to create new value.

Attracting good talent for your organization is challenging, and keeping up with customer expectations will quickly exhaust your ideas. Offering opportunities for innovation and continuous learning is ever more important to set your company apart from others. Get started planning your own innovation event today!

Specializing the specialist - Breaking up your testing roles

By Anthony Zabonik

Software Quality Assurance Engineering and Test Automation Development aren’t new concepts, but as the Agile paradigm continues its reign as the golden goose of the SDLC, this 20+ year old systematic process has seen a massive uptick in its necessity. Simply put, development shops that have introduced automated testing into their development cycle are not only getting their products out faster, they’re releasing with significantly fewer defects. As adoption continues to grow, it’s becoming more effective for a development shop to break its QA roles into multiple sectors of responsibility, in a similar fashion to how application development roles have been broken up (front-end, back-end, etc.). The key difference is that instead of breaking along visible or functional parts of an application, QA roles can be broken into various stages of the software development lifecycle. We’re going to break those QA roles down into three specific sections: QA Analyst, QA Engineer, and SDET. Full disclosure: the names of these roles will vary between companies, markets, and regions; the important takeaways are the behaviors and responsibilities.

QA Analyst

With a business-minded background, the main responsibilities of a QA Analyst involve tasks that aren’t usually considered automation-friendly. These tasks allow a QA Analyst to leverage his or her insight into the business requirements of a feature through a combination of manual, exploratory, and usability test strategies. Despite the extensive rise in automation, this is still necessary: in many cases a project will integrate with multiple systems that the development team doesn’t control, and automating against them would just needlessly increase maintenance overhead. While this role typically involves less development of automated tests, it’s not uncommon to see a QA Analyst contribute to the automated end-to-end testing effort to some degree. A QA Analyst may also be included in vetting requirements during sprint planning or feature grooming, as having QA personnel present in these meetings helps ensure the quality and testability of feature requirements.

QA Engineer

A Quality Assurance Engineer is going to be a team’s general test automation developer. Typically coming from a software development background, QA Engineers are responsible for writing and maintaining the automated end-to-end functional and acceptance tests that pair with whatever feature is being developed in a given sprint. This is predominantly black-box testing: they typically develop with a browser automation tool like Selenium, which simulates a real user stepping through an application’s workflow. A utilitarian role, on smaller teams a QA Engineer can also take on the responsibilities of a QA Analyst, as well as helping maintain the test automation infrastructure or its CI/CD integration. Ideally QA Engineers are embedded in feature development teams, focusing on a specific project.

SDET

The role of a Software Development Engineer in Test (SDET) is basically that of a devops-minded application developer that works in the system automation realm. They are responsible for setting up and maintaining the testing infrastructure, integrating the test suite or suites into the CI/CD pipeline, setting up test reporting, and building performance, resilience, or security tests where needed. SDETs may also have responsibility for building and maintaining any testing tools the QA Engineer or QA Analyst may use. This typically comes in the form of custom testing libraries or testing-specific services. While not always embedded in a project-based development team, SDETs are also common in Platform Teams, which are responsible for creating tools and environments that increase developer efficiency.

Faster defect feedback means faster development, and that is critical to building high-quality, fast-to-market solutions. By specializing your Quality Assurance roles, you’re allowing your QA team members to focus on what they do best (which makes them happy), you’re enabling faster feedback during development (which makes your application developers happy), and you’re ensuring each feature sees extensive testing before release (which makes your customers happy).

Using Docker for more than packaging applications

By Ed LeGault

Docker is by far the most popular software for containerization of applications. Containers make it easy to package, release, deploy and execute an application stack in a consistent and repeatable manner. However, there is more to Docker than packaging and orchestrating an application stack. Docker can also be used to build code, execute scripts and do things like data migrations.

There are many build tools, such as Jenkins and Bitbucket Pipelines, that now support running steps of a pipeline within a running Docker container. This has the following advantages:

  • Build dependencies like Node, Maven, and even Java no longer need to be installed on the build server
  • Application teams can dictate and maintain their build dependencies
  • Developers can build code exactly the same way the build server does

Code can be built via Docker by running a container and volume-mapping the source location. A few examples are:

  • docker run --rm -v $(pwd):/usr/src/app -w /usr/src/app node:4 npm install
  • docker run --rm -v $(pwd):/usr/src/app -w /usr/src/app maven:3.5.3 mvn clean install
  • docker run --rm -v $(pwd):/usr/src/app -w /usr/src/app gradle:4.7 gradle clean build

Docker can also be used to execute other things needed for a release, such as scripts or data-migration utilities. These changes can be packaged as a Docker image that is versioned along with the release they are needed for. This allows the installation of a release to be repeatable and testable via automation. Packaging and executing these tasks via Docker virtually removes the need for an installation playbook, because those steps are authored within the Docker image. If scripts are packaged in an image named "migration-utils", you can run them and even pass parameters to them. For example, to run the script defined as the image's entrypoint against the "prod" environment, you would execute the following:

  • docker run --rm -v local-dir:/path/to/files migration-utils prod

Notice that because you did not override the entrypoint, the trailing argument "prod" is passed as the argument to the image's defined entrypoint. In this example you also specify a volume that represents where the files the script needs access to are located.
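For illustration, such an image might be built from a Dockerfile along these lines (the migrate.sh script name and base image are hypothetical stand-ins; the original example did not name the script):

```dockerfile
FROM alpine:3.8

# The versioned migration script ships inside the image
COPY migrate.sh /usr/local/bin/migrate.sh
RUN chmod +x /usr/local/bin/migrate.sh

# Any trailing arguments to `docker run` (such as "prod") are passed
# to this entrypoint as its arguments
ENTRYPOINT ["/usr/local/bin/migrate.sh"]
```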

Using Docker to execute a build or run a set of scripts extends the repeatability that is a key reason to use Docker in the first place. The repeatability of these tasks, and the ability to deliver them as code, makes Docker an incredibly useful tool when creating a CI or CD process.

Security Practices for Mobile App Developers

By Tom Hartz

A lot of my time writing software is spent concerned about things other than security. From the companies and projects I’ve seen, security usually ends up being either completely forgotten, or prioritized behind all the other important things like functionality, UX, test coverage, and performance. Managers and other stakeholders usually are motivated to spend just enough time and money on a project so that it is functional in the form of a minimum viable product. Security is often not a concern for them, until it is too late!

As developers then, it is our ethical responsibility to ensure we follow best practices and keep security in the forefront of our minds. As craftsmen we must strive for the applications and tools that we create to be robust and impenetrable to outside intruders. Architects should be thinking about this often, but ideally even junior level developers should be aware of the basic best practices and techniques. It seems prudent to talk about some of the security concerns around mobile application development.

Perhaps most obvious is that you should secure all API communications with SSL encryption. It’s generally best to just use HTTPS to transmit all data between devices and servers, but especially sensitive data. Sensitive data includes Personally Identifiable Information (PII), your user’s password, user activity, company trade secrets, etc. There can be exceptions to this, of course. If the data is not sensitive, or if the communication only occurs on internal private networks, then maybe you don’t need to encrypt it. But it’s not hard to use SSL. Keep in mind, SSL alone does not make your application data 100% safe. Attackers have many other means to steal data, so packet encryption on the network is just the first step. Also, if you’ve heard of Heartbleed, you know that even widely used SSL implementations can sometimes be faulty. But flatly: it’s 2018, people, secure your friggin’ endpoints!

Sensitive data should be encrypted “in-motion” (on the network through SSL secured APIs) and “at rest” (when stored in a database on a server or on a phone). Local storage is a very common feature in mobile apps, and data should be encrypted here as well. The only time sensitive data should ever be unencrypted, is temporarily in RAM, and on a screen displayed to an authorized user. Encrypting local storage is a security practice that makes sure a compromised device does not make application data readily available to a thief.

A handy way to store small bits of information securely in an app is with the device’s secure storage (the Keychain on iOS, the Keystore on Android). These frameworks, provided by Apple and Google, essentially guard data so that it can only be accessed by the registered application at runtime. Now, there are older OS versions and jailbreaks out there where it is possible to decrypt and read this application data, so you should always consider the tradeoffs and only store data on the device when it is absolutely necessary. Is it really important that the user have access to this data offline? Can it be stored on the server and accessed via secure communication instead?

If you need to store more than a handful of secrets, you are probably using a more performant local storage option such as SQLite. This can also be encrypted! One approach is to encrypt/decrypt individual data fields on their way into and out of the data access layer of your application. However, this is a bit cumbersome and more processing-intensive than simply encrypting the entire database file, which your app can then decrypt into memory just once when it starts (the SQLite file will still be encrypted in the file system). DO NOT hardcode the encryption key(s) into your source code! I wouldn’t trust putting keys into a private repo either (GitHub has been hacked before). Ideally, each app instance should have a unique encryption key. You can store the encryption keys in the Keychain, or fetch them from an API. The latter implies that users would have to be online for an initial login, but they can then go offline as long as they don’t fully quit the app. Encrypting data this way can make it frustrating or impossible for an attacker to access when a phone gets lost or stolen.

When a device is known to have been lost or stolen, it is good to have a contingency plan. A remote wipe is possible from the OS, and is also something you can custom-build into your app (a push notification can trigger a background process to start). However, you should also consider that the user might not be immediately aware that they lost their phone. For this reason you should make logins expire as quickly as is reasonable. Personally, I never want to let a login last for more than a day. But again there is a trade-off here between security and usability, and no one likes having to log in over and over again. Side note: two-factor authentication through SMS becomes quite useless when the attacker is literally holding the user’s phone. To that end, I don’t think SMS auth adds any protection at all for mobile apps, and I have a laugh whenever an app texts me a code on the device I am logging in from.

Another concern for mobile apps in particular is code security. Typically, mobile apps are deployed via app stores and are available for download by anyone in the world. Maybe your app requires a login to do anything, so you don’t care if unauthenticated users install and run it. However, you should consider what a malicious “black hat” attacker can do with your app bundle. As an attacker, I can take your apk or ipa file, unzip it, and start inspecting the bundle contents for vulnerabilities. I might attempt to decompile and reverse engineer your binary code files. I could also deminify and tamper with your hybrid app JavaScript code. Doing so means I could read any hard-coded secrets or endpoints, or “fake” a login to move past that screen. This is where “security by obscurity” fails, and why it’s so important to have proper encryption in place to protect your data and endpoints.

*DISCLAIMER*: Black hat practices are not something I have a lot of experience with, and I’m writing this section hypothetically as if I were a malicious agent, for the sake of argument and fun. I would never attempt this on someone else’s compiled application! ✌️😊

When writing your application code, consider it vulnerable. Always strive to Keep It Simple, Stupid™. A popular opinion these days is the less code you have, the better. However, you should also be careful when trying to AVOID reinventing the wheel. If you are bringing in third party frameworks, do your due diligence and READ THE SOURCE CODE! If you can’t view the source before baking it into your app, that is akin to taking candy from a stranger. You are putting a lot of trust in these third party strangers to give you something sweet that won’t disable or kill you, or worse - attempt to steal your identity and information.

The last thing I want to touch on briefly is social engineering. This is definitely the hardest thing to protect against as a developer. We can follow all the security best practices in the world and have good encryption in place to protect our apps, but when a user gets scammed and leaks a password, all bets are off. Once an attacker has valid credentials, they are free to use the app as if they were someone else.

For internal business productivity apps, it is really important then to design good authorization mechanisms. Make sure you understand the difference between authentication and authorization, and make sure you’re doing both right. You should meticulously discriminate on who can see what data. Make sure employees only have access to data that they need to do their job! And if you can, make sure your organization has good training and communication in place to help your users recognize and avoid social engineering scams.
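
To make the distinction concrete: authentication answers "who are you?" while authorization answers "what are you allowed to see?". A minimal default-deny, role-based check might look like this Python sketch (the roles and permissions are hypothetical):

```python
# Map each role to the permissions it needs to do its job -- and nothing more.
ROLE_PERMISSIONS = {
    "sales":   {"customers:read"},
    "manager": {"customers:read", "payroll:read"},
}

def authorize(role: str, permission: str) -> bool:
    """Default-deny: unknown roles and unlisted permissions get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Real systems would derive the role from the authenticated identity (for example, claims issued by the identity provider), but the default-deny shape stays the same.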

For all you other mobile devs out there, I hope you learned nothing from this article, and rather that it reinforced some obvious things you already knew. It is all too easy to let security get swept under the rug on a project, and I’ve certainly been guilty in the past of focusing on the more fun parts. Security is a big topic and an area where the industry at large needs to keep improving. As security practices evolve, so do hacking techniques. Put another way, as hackers get better at cracking systems, we must also change our approach to security. It is almost like a dance, or a hamster in a wheel; it is a continuous struggle to design systems where digital information is safe. As a society, we are likely going to keep seeing large-scale data breaches occur. And again, as developers it is in our best interest to stay abreast of security concerns, keep security in the forefront of our minds, and design our systems as securely as possible.

A Day of Azure for an AWS User

By Andrew May

At the recent Global Azure Bootcamp event held at the Microsoft office in Columbus, I was the only one with a laptop covered in AWS stickers. Despite a strong feeling of imposter syndrome, I actually felt right at home due to the large number of similarities between AWS and Azure. Perhaps more surprisingly, as someone with a Java and Linux background, the platform appears very welcoming to the code I write.

Most of the topics had a presentation and then a lab giving us the chance to get hands-on with the services discussed.

Infrastructure as Code using Azure Resource Manager

This was listed on the agenda as "Advanced IaC with PowerShell and ARM Templates", and I had to do some searching to figure out what this was going to be about. It turns out that this is the Azure equivalent of AWS CloudFormation, which I'm very familiar with, and it was a perfect example of things being similar but not quite the same.

A few things struck me in particular:

  • ARM creates resources in a Resource Group (making it seem a bit like a CloudFormation stack), but Resource Groups are used extensively without ARM, and a Resource Group could contain both manually created resources and resources created via ARM (but that is probably not a good idea).
  • By default ARM updates run in "incremental mode", and if a resource is removed from the template it will not be deleted from the Resource Group. Specifying "complete mode" will delete resources in the group that are not part of the template.
  • Templates are in JSON - having switched from JSON to YAML for CloudFormation, I would not want to go back to JSON.
  • Templates have a variables section that can store computed values to be referenced within the template, and there is a much larger set of functions available than in CloudFormation.
  • The example templates for creating VMs in Azure seemed a lot more complicated than their equivalent in CloudFormation. I'm not sure if this is because they were showing every possible option, or if there is less default configuration.
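
For a point of comparison, a minimal ARM template that creates a single storage account might look like the following (the resource type is real, but the parameter name and apiVersion here are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2017-10-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {}
    }
  ]
}
```

The bracketed expressions are ARM template functions, evaluated at deployment time; this is where the larger function library mentioned above comes into play.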


Azure IoT

Everyone has to have an Internet of Things framework, and Azure is no different. There appear to be a lot of similarities to the AWS offerings, but not having used them extensively, it's hard to compare. Because we did not have any IoT devices to use at the bootcamp, we used simulated devices, which is fine but ultimately turns it into a basic messaging demonstration.

This session did inspire me to download Windows 10 IoT core and install it on a spare Raspberry Pi 2 that I had at home. While the Pi 2 is a supported platform, it was so painfully slow that I lost any interest in doing any more with it - I'm sure it would run much better on a Pi 3.


Azure Kubernetes Service

Both AWS (Elastic Container Service for Kubernetes - EKS) and Azure (Azure Kubernetes Service - AKS) have managed Kubernetes services in preview. The difference with Azure is that it's accessible for anyone to use immediately, whereas you have to apply to join the AWS preview. This may be a difference in philosophy between the platforms - there seem to be a lot of services in preview in Azure that anyone can start using, presumably taking on a risk that the service may change significantly before it is fully released.

I've been using AWS EC2 Container Service (ECS) for a couple of years to run containers in production and it's worked pretty well for us but it is fairly basic in terms of scheduling, and everything is AWS specific. The promise of both EKS and AKS is the ability to define container deployments using the standard Kubernetes tools while running on a platform where you don't need to worry about managing (or paying for) master nodes.
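
The appeal of the standard tooling is that the same manifest works against EKS, AKS, or a self-managed cluster. A minimal Kubernetes Deployment (the names and image here are illustrative) might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.13
          ports:
            - containerPort: 80
```

You would apply it with `kubectl apply -f deployment.yaml` regardless of which managed service is hosting the cluster, which is exactly the portability that ECS task definitions lack.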

I enjoyed the chance to play with AKS and use Kubernetes for the first time, using the generous free trial you get for Azure when you first sign up (we also got $300 in credit for attending the bootcamp which was very nice).

During this lab I had the chance to use the Azure Cloud Shell. Running a bash shell in a browser window on Microsoft's cloud (in a tab of Microsoft Edge) shows a great level of support for Linux users in Azure. Interestingly, the PowerShell version of Azure Cloud Shell has just switched to running on Linux instead of Windows.

Azure Serverless

Azure Functions are similar to AWS Lambda with a different set of supported languages but some overlap. Logic Apps (based upon BizTalk) allow workflows to be defined and systems to be connected without writing any code, and there's nothing really equivalent in AWS.

Because this was the last topic of the day it didn't have a lab, so I haven't gone hands-on with Azure Functions yet to see if Java functions are as painfully slow to start on Azure as they are on AWS.


I really enjoyed the Global Azure Bootcamp and I'll probably attend next year as well. Thank you to the organizers, Microsoft, and the other sponsors (although the sponsor who gave away an Amazon Echo Dot might want to rethink their choice of prize).

Both AWS and Azure offer such a large number of services that it's hard to compare the platforms. However, both offer a lot of similar building blocks and a lot of applications could be run equally well on either.

Testing Non-Functional Requirements

By Dennis Whalen

Back in the day, I was brought onto a project to lead performance testing activities for a web application that had been in development for a couple of years. The team had daily stand-ups but the project as a whole was run as waterfall and we had reached the "testing phase" of the project.

With performance testing, we needed to ensure the application could handle multiple concurrent users, as to that point we’d only done single-user functional testing. In addition, development had been conducted against a small database, and we needed to conduct performance testing with a more "production-like" database. Before we started the performance testing, it was important to set up a test environment that was sized and configured similarly to production.

Once we had built out the large database, it was time to start throwing lots of users at the application, produce some reports, and move on to the next project! Except for one problem. We couldn’t move past the home page, with just a single user. All we saw were database timeouts and errors. Introducing a production-sized database had quickly brought the application to its knees. Although there was nothing in the plan for performance tuning, it was clearly time for a tuning phase!

Performance testing is a type of non-functional testing that validates the application conforms to the non-functional performance requirements, such as number of concurrent users, response time, error rate, etc.

In addition to Performance requirements, other non-functional requirement types include:

  • Scalability - does the application conform to scalability requirements by handling additional users and workloads without compromising the user experience?
  • Security - does the application conform to security requirements, by protecting access to the application functionality and data?
  • Capacity - does the application conform to the capacity requirements related to data volume capacity?
  • Reliability - does the application conform to reliability requirements related to uptime and application availability?
  • Maintainability - does the application conform to maintainability requirements related to the ability of support personnel to support, revise and enhance the application?

A typical enterprise development project will have requirements around these areas and it’s essential to have a strategy to confirm the application conforms to those requirements.

As you look at these requirements, early in the project, it’s important to ask three questions:

  • How can I validate the application conforms to these requirements?
  • How can I automate that validation?
  • How can I include that automation in the CI pipeline?

Most agile projects today include automated functional testing within the sprints, but there are still a lot of projects that don’t start focusing on NON-functional tests until it’s time for a deployment.

We all know the value of automated regression testing of functional requirements.  With automated regression testing we can find bugs quickly and address them before they fester and become a bigger issue.  The same is true with regression testing of non-functional requirements.

Testing tools and test strategies can allow you to validate an application’s conformance to these requirements, and build automated test scripts that can be included in the continuous integration (CI) process.

For example, some popular performance test tools include Apache JMeter, Micro Focus LoadRunner, and Microsoft Visual Studio.  All of these tools allow you to define test scenarios, the number of concurrent users, load patterns, test durations, etc.  In addition, they all provide canned and customizable reports to communicate the results of the tests.
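
The core idea behind all of these tools can be sketched in a few lines of Python (names are illustrative; real tools add ramp-up schedules, assertions, and reporting on top of this):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(task, concurrent_users, iterations):
    """Fire `task` from a pool of simulated users and summarize latencies.

    In a real test, `task` would wrap an HTTP request against the system
    under test.
    """
    def timed_call(_):
        start = time.perf_counter()
        try:
            task()
            return time.perf_counter() - start
        except Exception:
            return None  # counted as an error below

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(timed_call, range(iterations)))

    latencies = [r for r in results if r is not None]
    return {
        "requests": iterations,
        "errors": len(results) - len(latencies),
        "avg_seconds": statistics.mean(latencies) if latencies else None,
        "max_seconds": max(latencies) if latencies else None,
    }
```

A report like this (requests, errors, response-time statistics) is exactly what you would feed into a CI gate: fail the build when the error rate or average latency exceeds the non-functional requirement.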

Security requirements are another critical type of non-functional requirement.  These requirements must be defined early, as they are critical input to defining the appropriate application architecture.  The Open Web Application Security Project (OWASP) is an organization of application security experts that focus on defining key application security threats and strategies for mitigating them.

The OWASP Testing Guide provides guidance and best practices for validating an application’s conformance to Security requirements and best practices. In addition, there is a broad range of Security tools such as the OWASP Zed Attack Proxy (ZAP) that can allow you to include automated Security testing into the CI pipeline.

So you’re probably wondering, what happened during our “tuning” phase? Of course the first step was to get the application to work with one user! Luckily most of our issues were isolated to the database. After weeks of iteratively tuning and testing we were able to clean up a LOT of performance issues within our SQL stored procedures.

It turned out that 90% of our performance issues were related to about 5 types of SQL coding problems. If we had started that testing at the beginning of the project, we would have caught those issues early and made sure they did not propagate throughout the application.

Just like with functional testing, automated regression testing of NON-functional requirements during the sprint will save you lots of time, money, and headaches down the road.

The Next Generation of Programmers

By Ned Bauerle

Our world is constantly introducing new technology in many different forms. We have mobile phones that are more like mini computers than phones. We are able to pause, rewind, and fast-forward our televisions. Our cars have computers to control various safety features and, in some cases, even drive the car for us. Face it, we are becoming more and more dependent on highly technical devices, which require complex programs.

It’s not a secret that many children follow in the footsteps of their parents when selecting a profession, and if you are reading this article, chances are that you work in a technical field. Perhaps you want to teach your children about programming, or perhaps you have been approached by a friend or relative whose children are interested in learning to program. In either case, the intent of this article is to help you get started and provide some resources for teaching the next generation of programmers.

Children are generally hungry to learn. We need to help them out and it is good to strike while the iron is hot, but how do you get started?

I bet that most of you are not elementary school teachers and have not likely gone through training on how to educate young children. Some of you are parents, but teaching children about technology (especially your own) is not the same thing as raising children. So here are a few general tips when teaching children:

Let the child learn. Some will pick things up quickly and some will take a few times before they “get it.” Either way is OK, everyone learns differently, just remember to be patient and let them learn in their own way.

It is easy to get frustrated when teaching children; just remember to keep your cool and be encouraging. No one does well when they are being yelled at; rather, the opposite will happen, and they will only remember that you were upset with them.

As a Cub Scout leader and BSA Scoutmaster, I was trained to use the EDGE method, which works really well, especially with young children.

  • Explain what you are going to learn / do
  • Demonstrate how to do it
  • Guide or coach them while they try it themselves
  • Enable them by letting them do it without guidance (only assist when they ask for help, but don’t just jump right in; first give hints)

If you are a programmer or have tried to learn programming, then you will probably agree that there is a lot of dry, sometimes very abstract material. When teaching kids, you have to keep things interesting. Many kids think they are going to write the next World of Warcraft or some other extremely complex piece of software. Encourage them to dream big. Tell them that it is a great goal and that there are a lot of things they will need to learn to accomplish such a feat. Let them know that they will need to practice on smaller ideas, like a calculator or a number guessing game, so that they can first learn HOW to program.
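
That first number guessing game really is small. Here is a sketch in Python; the `ask` parameter stands in for `input()` so the game is easy to demonstrate, and it shows loops, conditionals, and functions working together:

```python
import random

def number_guessing_game(secret, ask):
    """Classic first project: guess the secret number."""
    tries = 0
    while True:
        guess = ask()
        tries += 1
        if guess < secret:
            print("Too low!")
        elif guess > secret:
            print("Too high!")
        else:
            print(f"You got it in {tries} tries!")
            return tries

# In a real session, a kid would play it like this:
#   number_guessing_game(random.randint(1, 100),
#                        lambda: int(input("Your guess? ")))
```

It is a nice stepping stone: small enough to finish in one sitting, but with real program structure to talk about afterwards.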

One thing that is important is to use the right teaching tools and techniques for the right age groups. It took me a while to figure out that my 6-year-old was not going to start programming Java right out of the gate. I then tried to teach him HTML, which he thought was interesting, but I realized that I couldn’t easily transfer what he was learning into an actual program. I decided that I had better do a little more research to figure out a good path for teaching my kids, rather than taking random stabs in the dark and causing us both to get frustrated.

These are the resources that I found:

HourOfCode has a lot of fun activities and games that you and your kids can use to introduce programming concepts. The activities are divided into age groups, from pre-readers all the way through high school age. They are designed to be about an hour long each, sometimes with an additional follow-along activity. I see these activities as great introductions to a lot of different opportunities, but there really isn’t a lot of consistency from module to module, and if you decide that you want to learn more about a specific language or technique, there isn’t really a guide for that.

Scratch by MIT Media Lab is a free online tool and community that has a lot of great resources for teaching young children and older kids alike. A series of step-by-step tutorials is built directly into the Scratch ecosystem, and you can share your ideas as well as see what others have shared.

The interface allows you to drag and drop blocks, many with input boxes, that represent various programming constructs such as conditional if/then statements, loops, variables, etc. This tool is fantastic for teaching kids to think logically about how to accomplish certain tasks. The graphical user interface and fun cartoons make it interesting, easy, and fun for kids to learn about programming. If you browse through the community gallery, you will find that you can accomplish pretty fantastic things, like the games Pac-Man or Galaga. Scratch is a great tool for those who want the freedom to create endless ideas, but it will only last for a period of time. Eventually your child will outgrow it, because Scratch can get bogged down and slow. They can write fun, simple games, but they really want to write Minecraft. When you see this happen, you know that it is time to start teaching them to program using the syntax of a coding language.

Code Combat is a tool that is a cross between a game and programming. Just as Scratch is great at teaching logic and problem solving using concepts like loops and conditionals, Code Combat expands on that knowledge by teaching syntax. You are a player in a game where you have to solve different challenges. As you move through the game, you earn new objects with different capabilities (commands). For example: the first item you get is a pair of boots, which allows you to move left, right, up, or down. You have to enter the program and click run to see if you complete each challenge.

One day my oldest was at the point where he wanted to put everything together in his own program. I found the book “JavaScript for Kids: A Playful Introduction to Programming” by Nick Morgan. There is a lot of good humor, and the book teaches all of the fundamentals of JavaScript.

When he finishes that book, he will be at pretty much the same point as many junior-level developers. I will likely introduce him to NodeJS, followed by Angular 5. I am certain that he could work through the tutorials even now, but he is excited to finish the book because he noticed that at the end they build the game Snake.

I hope that you are able to get some information from this article that can enable you when teaching your kids (or your friend’s kids) how to program and at least an idea of what other things you can find to help them in their journey.

Good Luck !!

Machine Learning in Mobile Applications

By Tom Hartz

Machine learning is rapidly integrating into the consumer end-user experience. Netflix routinely recommends shows I may enjoy based on my viewing history. Snapchat’s filters are driven by complex facial recognition and evaluation algorithms. Facebook can identify which of my friends are in a photo with uncanny accuracy.

The potential impact of machine learning is not bound by industry or trade. Health care, agriculture, media / entertainment, household appliances, aerospace, and defense are all being explored and enhanced by machine learning.

The most common architectures found in today’s machine learning implementations are cloud-based. Voice assistants such as Siri and Alexa require connection to powerful servers, and use massive amounts of computational power and data streaming. However, a new trend is emerging to optimize machine learning algorithms and move the computation onto the mobile device.

The shift towards performing machine learning tasks on mobile devices has been heralded by Apple’s CoreML library for iOS 11, as well as Google’s release of TensorFlow Lite (supports both iOS and Android). Qualcomm is also making waves on this trend. Their Snapdragon processors are designed from the ground up to harness the power of machine learning on smartphones and tablets. They also offer an SDK for supported Android devices. Developers can now take advantage of these libraries and integrate AI into their mobile applications without needing to connect to the cloud.

With these and other recent advances, we are witnessing the beginning of an exciting new era of smartphone app intelligence. Machine learning is becoming more efficient and ubiquitous on mobile devices.

Mobile applications that perform machine learning on-device are more dependable: when the internet connection becomes intermittent or non-existent, these apps can still provide value to users and perform sophisticated tasks without relying on the cloud.

Many people have raised privacy concerns in this modern era of technological advancement, with our newfound dependence on the internet. People today are concerned about issues like identity theft and mass surveillance. By performing machine learning tasks locally within mobile apps, data security stands to be greatly improved. All of the input data for a machine learning model can now be accessed and analyzed on the mobile device itself, rather than being routed through various network switches and external servers.

As non-technical end users become aware of the potential of AI, consumer expectations are rapidly outpacing what current AI / machine learning platforms are actually capable of delivering. For example, the release of the Apple HomePod was met with widespread criticism. Rather than praising its ability to analyze listening habits and provide suggestions for new music, many users criticized it for lacking the ability to call an Uber or differentiate between the voices of different household members. Developers, UI/UX designers, project managers, and business leaders alike should all be cognizant of rising user expectations, and strive to improve products and tools by exploring the possibilities of machine learning.

AWS Cloud Practitioner Certification

By Andrew May

Amazon Web Services (AWS) recently introduced a new Cloud Practitioner entry level certification, and I’ve just completed the free online training and the examination for this certification to try and determine who it might be useful to within our organization. I already hold the AWS Solutions Architect (Associate) certification but I’ve been trying to evaluate this from the perspective of someone relatively new to the platform.

Amazon recommends that candidates “have at least six months of experience with the AWS Cloud in any role, including technical, managerial, sales, purchasing, or financial”. In other words this certification claims to be for pretty much anyone. I was particularly interested in whether this training would be useful for our project managers, sales and leadership teams. Having 6 months of experience may make sense for those in technical roles, but for others I would think it’s useful to have some training before trying to manage or sell an AWS based cloud project.

The certification is broken down into these areas (from the Exam Guide): Cloud Concepts, Security, Technology, Billing and Pricing, with the largest portion of the questions being from the Technology area. The Technology questions are mostly testing whether you have a basic understanding of what different AWS services are and how they relate to different architectural principles; for example auto-scaling groups give your system elasticity.

There is free training available via the AWS Training portal; you can either create an account with it directly (or use an existing Amazon account), or, if you are part of the AWS Partner Network (APN), access the training via the APN portal to ensure that it is linked to your APN account. The “AWS Cloud Practitioner Essentials” course provides about 7 hours of video training.

While this training does cover most of the topics that are in the exam there are some problems with the content:

  • Some sections appear to be out of order - Application Load Balancers are covered before Classic Load Balancers, with the Classic Load Balancers section containing the fundamentals of load balancing.
  • The Bonus Materials section contains some very well done videos on Virtual Private Cloud, Security Groups, NACLs, IAM and encryption. Unfortunately all of these videos are out of order which could make them confusing. They also go into far more technical depth than is required for the exam.
  • There’s a long section on the AWS Well-Architected Framework that would be more at home in the Solutions Architect training. It’s not clear if any of the exam questions are related to this (none of the questions I had were).
  • Most of the Core Services presentations start with an overview of a service and then show a demo of using it in the AWS console. These demos are less useful than a hands on lab for technical staff and not useful for non-technical staff.

The popular cloud training provider “A Cloud Guru” also has training for the Cloud Practitioner certification. I’ve not taken this training, but from reviewing the course outline and reading the forums this appears to take a more hands on approach, with labs where you set up a website.

Both of these training courses highlight the main problem with the certification: it claims to be for everyone, but the content appears to be an uneasy mix of technical and non-technical material. The exam will ask you about Edge locations (as used by CloudFront), but also expect you to have memorized what’s available at the different support levels.

I would be most comfortable recommending the certification for those in a project management role where understanding what services are available to better understand technical discussions will be useful, and they will also be involved in decisions about support and billing.

It may provide a useful introduction to AWS for technical team members who want to know more about AWS but aren’t ready to dive into the Developer or Solutions Architect training, but for most I’m not sure it’s worth spending the time or money to get the Cloud Practitioner certification.

For the sales and managerial teams I would instead recommend the AWS Business Professional Online accreditation that’s available and free to APN members. It gives a better overview of the value proposition for using AWS and the available services without going into unnecessary technical details.

Preventing the Hero

By Ed LeGault

I started my career as an intern in the ecommerce department at a fairly large company. After a few days around the office I started to notice a guy named Joe. I didn’t notice him because of his appearance or his car. I noticed him because he was the guy that everyone, and everything, went to. Production problems went to Joe. Performance questions went to Joe. Code reviews, schedule questions, testing questions…. you guessed it…went to Joe. He seemed to work 24 hours a day and was always frazzled or tired looking. His desk was a mess and it seemed like he was surrounded by monitors like he was in charge of his own command center. I remember thinking “wow, I want to be that guy someday”. I also remember thinking “man, if Joe ever gets hit by a bus we are screwed”.

One way to know you are on the right track in your DevOps journey is to prevent or eliminate the hero, a.k.a. Joe. My favorite quote on this topic is “A culture that rewards firefighting breeds arsonists”. In other words, if someone keeps getting rewarded for putting out fires, there is going to be resistance to implementing processes and procedures that eliminate those fires. This can be a painful process, because in most cases the person who has been in hero mode is not going to accept that they need to automate or eliminate themselves.

How do you eliminate the need for a hero? The first step is identifying who or what your hero is. You might not have a hero who is one person, but instead a group or department. Once identified, find ways to optimize the flow of work through that person or department. That could mean spreading the hero’s work amongst other people and/or facilitating training for the things only they know. It could mean studying what is done manually and finding ways to automate those things. It could also mean finding useless steps in the process and eliminating them. Once you optimize the flow of work, you will see the need for a hero diminish or disappear entirely.

So, what happened to Joe? He became so confident that the department couldn’t function without him that he gave them an ultimatum demanding a raise. The department decided to use his threat as a way to eliminate their dependency on him and let him leave. This was more of a “rip the band-aid off” approach to optimizing the bottleneck. Since there were no cultural, procedural, or systematic changes in the wake of his leaving, another person stepped in and became the hero. He then left, and another person stepped in, and you can probably guess how it kept going from there. Preventing the hero, a.k.a. optimizing the bottleneck, requires a change in culture and a willingness for everyone to check their ego at the door. When there are no more tights and capes around the office, you know you are on the right track in your DevOps journey.

Agile Testing - 4 Steps to Success

By Dennis Whalen

Agile development allows a team to incrementally and continuously deliver value to their client.  To quickly deliver this new functionality and maintain confidence in the existing features, the team needs to have quick and continuous testing feedback.

The old strategy of getting QA involved for some manual testing after the coding is complete just does not work in an agile environment.  Even on some “agile” projects, QA can be more of an afterthought than a first-class member of the team.  In addition, sometimes we see QA that is primarily manual testing, with little to no automation.

On projects like this, thoroughly testing the application with each change is impossible to do quickly, so we do a handful of tests and hope for the best.  As bugs start slipping through to production, confidence in the application fades and there is plenty of fear related to each deployment.

Let’s look at a typical flow you might see on one of these projects:

  1. The agile team is working to develop a new web-based ordering application for their client.  After talking to the Product Owner, the Business Analyst creates a new feature card.  The card details the requirement: “due to end-of-year inventory, no online order should be accepted during the last week of the year.”
  2. In sprint planning we find the Scrum Master has spoken to the Product Owner and has some more details - the ordering portion of the application should provide a generic "we're closed" message during the last week of the year.  The dates for the dark period should be stored in the database.  Based on these details, the developers estimate 5 points for the work.
  3. The feature is included in the sprint, a developer picks up the card, makes changes locally and does some testing.  The developer updates the card with detailed technical documentation about web services changes and database changes.  The card is then assigned to QA.
  4. A QA analyst picks up the card.  They remember this being discussed in sprint planning, but they’re not sure how to go about testing it.  The QA analyst decides to just ask the developer, since the card has a LOT of technical details, but nothing that clearly describes how to test.
  5. After consulting with the developer, it’s clear that the feature can’t easily be tested in the development environment, as it requires a database change to update the “ordering closed” dates.  Making database changes in development requires approval and DBA involvement, so the quickest way to get this testing done is for QA to manually test the change by pointing to the developer's laptop.
  6. The developer updates the dates for "ordering closed" on their local machine to include today's date.  QA manually accesses the app and gets the “we're closed” page.  Since that’s all we need to test this, QA updates the card with their findings, including screen prints and all the steps that were included in the testing.  QA then assigns the card to the Product Owner for approval.

There are clearly some areas for improvement here.  When incorporating QA into an agile project, there are 4 areas to focus on immediately:

1 - Talk testing immediately

In the example above, QA did not really get involved with the feature until the development work was complete.  Acceptance Test Driven Development (ATDD) requires us to start talking about testing immediately, as part of building the product backlog.

The 3 amigos process is a collaborative discussion with representatives from the business, development, and QA.  The unique perspective of these 3 groups can help clearly define the work to be completed.

As we discuss features to be added to the application, we define the requirements by describing how we will test the feature.  The product owner, developers, and QA work together to define acceptance criteria with clear examples before estimating or starting development.

We typically define these requirements with gherkin, a business-readable language that defines how the user interacts with the application and how the application responds.  These are the concrete examples we use to define the requirements.  For example, one scenario we could use to test the feature might look like this:

Scenario: Ordering during an inventory period
Given the user logs on to the ordering application during an inventory period
When the user attempts to access the ordering page
Then the “we’re closed” page is displayed

2 - Automate your tests immediately

In our initial example, there was no automated testing.  If we “talk testing immediately”, we’ll have gherkin that can be used to automate the testing.  

There are a number of frameworks you can use to automate your user test scenarios.  Some examples include:

  • Ruby/Cucumber
  • Protractor
  • Java/CucumberJVM
  • .Net/Specflow

These tools all provide frameworks and best practices that can be used to automate the user scenarios.  Picking the best tool depends on many factors, including the technology stack of the system under test and the skills of the test automators and developers.

As part of automating the user stories, the QA automation process can also automate the setup of test data.  In some situations the automation code could simply read existing data from the application under test and use it as test data.  In our example we have the step “Given the user logs on to the ordering application during an inventory period”.  The automation code behind this step would likely directly update the dark period dates prior to continuing with the test.

QA automation developers should build test components as the application features are being developed.  These components are code that will:

  • interact with the system under test, just as a user would (as described in the gherkin)
  • validate the system responds with the expected results (as described in the gherkin)
  • create reports that describe the results of the testing
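As a rough sketch of what such components might look like, here is a framework-agnostic Python example for the inventory-period scenario above.  The OrderingApp class and step functions are hypothetical stand-ins for the real system under test and are not any particular framework's API:

```python
from datetime import date

# Stand-in for the application under test; the real system and its
# API are assumptions here. This only sketches how gherkin steps
# map to automation code.
class OrderingApp:
    def __init__(self, dark_period):
        self.dark_period = dark_period  # (start, end) dates of the closure

    def ordering_page(self, today):
        start, end = self.dark_period
        if start <= today <= end:
            return "we're closed"
        return "order form"

# One step implementation per gherkin line.
def given_inventory_period(app, today):
    # Given: arrange the dark-period dates so "today" falls inside them.
    app.dark_period = (today, today)

def when_user_accesses_ordering_page(app, today):
    # When: the user interaction we are testing.
    return app.ordering_page(today)

def then_closed_page_displayed(page):
    # Then: validate the expected result.
    assert page == "we're closed"

app = OrderingApp(dark_period=(date(2019, 12, 25), date(2019, 12, 31)))
today = date(2019, 6, 1)
given_inventory_period(app, today)
page = when_user_accesses_ordering_page(app, today)
then_closed_page_displayed(page)
```

The framework you choose (Cucumber, SpecFlow, etc.) would wire each gherkin line to a step function like these via annotations or bindings.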

3 - Include test automation in the CI pipeline immediately

Another hole in our sample flow above was that the manual testing was performed on the developer’s machine.  Instead, we should test against an environment that was built via an automated process similar to the production deployment process.  That process will build the application from the source code that is checked into the configuration management system.

Once we have a process that does automated deployment to a test environment, we can include our automation testing as part of the deployment.  This immediate feedback will allow us to quickly find and stamp out any bugs, and to have more confidence in the quality of our application.

4 - Prominently display test results

Finally, once you have your automated tests running as part of the CI pipeline, it’s essential to make the reports visible to the team.  Fixing broken tests should be the number one priority of the agile team, and making the tests visible to all will help to increase the urgency of getting broken tests fixed.

Addressing these 4 areas as soon as possible will allow QA to provide value immediately by improving the quality of the application and giving the client and team confidence in the quality of their work.

Communicating Software Architecture

By Ned Bauerle

Communicating technical software design and architecture can be challenging. As a solutions architect, I am quite familiar with the difficulties involved in conveying technical software design ideas to both technical people (developers) and non-technical people, including project managers, business analysts, and product owners.

I have discovered that technical people generally stop listening as soon as they think they understand what you are describing and begin to problem solve. “Squirrel!”, “Shiny object!” … how many times have you witnessed it? You can never get a complete idea across, because most developers have to be quick thinkers (I was like this early in my career). It is not their fault; it is just human nature and part of the job.

Non-technical people may never fully understand what you are describing, especially if you assume that they understand some of the technical underpinnings of frameworks, 3rd party software, or even architectural patterns. In those cases you will likely never fully convey your awesome technical software solution that can address their business problem. Instead, you may create a state of confusion, and sometimes skepticism will develop instead of trust. When that occurs you will have additional challenges throughout the project, as you will likely have to describe each technical choice made during the project in elaborate detail. It can cause a great deal of wasted time.

I have come to find the best way to communicate technical ideas is to make use of diagrams backed by documentation. Representing complicated technical ideas in a diagram is a challenge, but keep reading and you will be amazed at some simple things you can do to improve your diagrams.

The information technology (IT) profession is in its infancy at only about 50 years old (computer science became a program of study around the year 1960). Other professions, like mechanical engineering and medicine, have been around for hundreds if not thousands of years. The IT community should consider “standing on the shoulders of giants” and leverage what others have already learned as opposed to rediscovering it for ourselves.

Let us take a look at how some of these more established professional domains communicate technical subject matter and learn from them. Here are two examples from domains other than information technology.

Here you see a description of the F-35 fighter jet, formerly known as the Joint Strike Fighter.

Different diagrams and descriptions can serve different audiences, each with different needs, but this diagram can be followed by most people.

The next diagram shows very technical information about the human heart. Doctors use this diagram to diagnose and interpret potential health risks. But they can use this same diagram to communicate with their patient who most likely does not have any medical training.

Getting the right information to the right stakeholders is crucial and you can see that very technical ideas can be conveyed in a simple, non-intimidating manner.

Now let’s look at a typical software architecture diagram ...

We are pretty good at capturing technical information and communicating with developers and other architects. We have established standard languages and tools for that level of communication, but those standards and tools are not always fit for non-techies who many times are the people that make important decisions which impact the architecture (like where to invest money).

We fail to describe the relevant information and leave out the rest. We fail to speak a language that they can understand. We fail to put them in a position where they have just enough information in order to make good decisions and investments which ultimately helps the team get the job done.

Creating Effective Architecture Diagrams

The goal should be to keep diagrams simple and make it easy to understand how things relate. Let discussions and other documents provide deep-level detail.

Did you know that your choices of color, shape, lines, etc. convey subconscious meanings that may make the difference as to whether people trust the diagram or even pay attention to it at all? Did you take art classes when you became a software architect? Are you aware of art psychology?

Why Understand Art Psychology?

Art psychology is the study of the perception, cognition, and characteristics of art. Different people have different reactions to, and interpretations of, the same piece of art. For example … what do you see in the picture below?

Did you see a saxophone player or a woman’s face? Look again and try to see the interpretation that you did not see the first time.

It is important to get at least a basic understanding of art psychology so that you don’t spend a lot of time creating your diagrams and then have them misinterpreted or ignored.

Pre-attentive Processing

Your initial interpretation of the image above is driven by a concept called pre-attentive processing which is the subconscious accumulation of information from the environment. All available information is pre-attentively processed. Then, the brain filters and processes what is important. Our eyes and brains are wired to perceive basic visual attributes of objects without any conscious effort, extremely fast and in parallel.

There are several aspects of pre-attentive processing that you should consider when creating your diagrams as shown in the picture below:

You will notice that you only have to glance at each grid of nine squares to see which block or set of blocks is different. The lesson is: we should use these attributes (size, shape, line width, orientation, position, markings, enclosures, and color) explicitly to put focus on the things that are most important in the diagram and worth our attention. This takes some practice and attention because many of us like things to be orderly and symmetrical.

You can find many other examples on the internet if you search for pre-attentive processing.

Color Theory

You probably noticed that color is one of the pre-attentive facets of a diagram, but you should also be aware that which color you choose plays a significant role. There is a field of study called color theory that centers around the emotions and feelings people experience when looking at certain colors. Take a look at the following chart to see how the color you choose can impact the emotions conveyed by your diagrams.

Have you ever noticed that most company logos are blue? Can you guess why? If you want to learn more about color theory you can search the internet for color-theory where you can find all kinds of information about contrasting colors or how groups of colors can also cause different responses.


You’ve heard it a thousand times “a picture is worth a thousand words” and it couldn’t be more true. Although it might seem sloppy, perhaps even childish to include icons and images in our documents, the effectiveness of pictures beats words in many cases. We are after all going for communicative effect with our diagrams.

Take a look at the following picture ...

On the left side of the picture are typical symbols used on our architecture diagrams (boxes, arrows, and words). On the right are images to convey the same ideas.

Which is easier to process? I rest my case.

Examples of effective architecture diagrams

There are many other concepts that can be applied to your diagrams and I leave that as a task for you. Once you get started looking deeper into the concepts mentioned in this article you will find yourself in a much bigger world of communicating with art. Applying just some of the concepts will help you improve your technical architecture by leaps and bounds.

Here is an example of what a good architecture diagram looks like.

Cisco Systems architecture for wireless integration

The Modern Day Software Architect

By Ned Bauerle

The description of a software architect varies significantly in the IT (information technology) industry. You may hear a colleague introduce themselves with a title like systems architect, senior architect, or even chief architect, and you wonder to yourself: what does that title mean, and do I want it?

You are not alone!!

Many business leaders, human resource departments, and even technical managers share this job title confusion. There are seldom commonalities in how different companies define the software architect’s role, let alone in how software architects are actually utilized within those organizations.

We get so accustomed to the busyness and pandemonium of our daily work, trying to keep up with new or updated technologies or projects that are behind schedule, that there is seldom a moment of downtime. We never have an opportunity to stop and think about what the role of a software architect should mean in the broad landscape of software development.

Most professions have distinct role definitions and/or education paths. For example an electrical engineer is different than an electrician, a structural architect is different than a builder, a medical assistant is different than a doctor.

Why is there so much confusion in the IT profession architecture role and responsibilities?

To answer this question you have to realize that we are the PIONEERS of this field of study!!! The IT profession is in its infancy at only 50 years old. Computer science became a program of study around the year 1960. Electrical engineering, on the other hand, started in the 1880s, and structural architecture was established circa 2100 BC.

The earliest of software architects would create detailed UML (Unified Modeling Language) diagrams including detailed class definitions. Typically these diagrams were passed to a development team for implementation. This process worked well for hardware or software that was manufactured or created and delivered on a CD, but changes would either require a new product or a new packaged delivery. The process was slow and methodical, typically following “waterfall-like” project methodologies.

In contrast, the onset of the internet allows for cheap delivery and quick updates to software including software that can be used in a web browser without the need for installation. Software can be developed and deployed so quickly that development teams can tackle small bug and feature releases simultaneously. Agile project methodologies paved the way for quick feedback cycles and smaller chunks of work. Creating concise UML diagrams was no longer necessary and nearly impossible to keep up to date with these small changes. Tools like Enterprise Architect by Sparx Systems were developed to automatically synchronize code and diagrams, but in most cases these types of diagrams were just not needed anymore.

So where does the software architect fit in now? Are architects still needed?

At the 2015 SATURN conference (the Software Engineering Institute (SEI) Architecture Technology User Network Conference), keynote speaker Gregor Hohpe stated, “There is always an architecture; if you don't choose then one is assigned.” He also mentioned that if you do not plan the architecture before embarking on the solution, then the architecture you get will most likely not be the one you want.

Today we find that there is confusion about what differentiates an architect from a lead developer as the architect in many cases is either eliminated as a position or integrated with the development team as a lead developer. We have fallen into a pattern where most projects lack a plan (diagram) of the overall architecture that can be used to communicate choices in development. Developers many times focus only on the task(s) at hand (or in sprint) with little thought about how the task(s) fit into the big picture of what is being developed.

We still need software architects!! We just need to rediscover how the software architect fits into project teams and organizations.

While the “agile” world desires to create highly functional teams that are empowered to influence the project, there is still a need to create a high-level technical roadmap in order to communicate progress throughout our project teams. Without it we run the risk of creating a reputation of recommending work or technologies that executives may not fully understand. In those cases teams get pushback, because to the executives it sounds like work that might not be necessary.

We need our software architects integrated with the development team where they can evaluate and adjust to changes in business concerns throughout the process. The architect should be involved early in the process to put together a few meaningful (high level) diagrams which can be used for communication both to management teams as well as development teams.

The size of the project, team, and organization all play factors in how the architecture will be created and maintained. Large organizations are starting to restructure to utilize architecture practices that keep a cohesive technical vision for the company which is then used to advise and unify development teams. Small businesses or projects might not have the capacity to create a separate architecture group so the team may include an architect or conduct mindful architecture sessions throughout the development cycle to create and maintain the architecture.

Traits of a Good Architect

Because most architecture roles do not presently have dedicated college programs and tools (like CAD for structural architects), we must utilize individuals who have deep experience and expertise with software development, such as the following:

Experienced in Computer Programming - A good architect has been through the good and bad implementations and can use this information to plan out decisions based on experience.

Possess an Analytical Nature - A good architect never stops learning. Technology advances so quickly that in order for an architect to be effective they must keep up with the possibilities.

A Good Communicator - Architects generally need to be able to communicate with both technical and non-technical individuals both verbally and visually through diagrams.

Good at Estimation - Architects should be able to scope out and estimate a project typically before implementation has started.

Both a Leader & a Team player - Most of the planning is done up front. An architect needs to be able to allow the team to influence the choices during implementation and be decisive at times.

Technical Facilitator - During the implementation of a project an architect must remain involved and typically will facilitate communication between the development team, project management, and product owners so that everyone can understand the state of the project.

Publications & Research

To truly advance the profession of software architecture we need to find ways to convey both successful and ineffective solutions for the common problems we solve.

Currently there are hundreds if not thousands of projects in progress that are re-creating solutions that have been successful on other projects, perhaps even experiencing the same failed ideas along the way. If we spend our time re-doing the same work over and over rather than sharing our efforts, we will advance very slowly indeed.

At this point you may ask “What can I do about it?” and I say to you …
Start blogging,
Start mentoring,
Share your knowledge however possible.

Publish articles or share patterns if you are not able to publish proprietary code.



QA Automation Strategies for ETL

By Dennis Whalen

I recently started a new assignment as the QA lead on a project team that is building an ETL (Extract, Transform, and Load) application.

ETL processes are all about moving data, and do not typically have a user interface. I was excited about a new challenge, but my experience with QA automation has been focused on user-facing applications.

Behavior-driven Development

On past project teams, we have utilized Behavior-driven Development (BDD) to drive the development of user-facing applications. A couple key components of BDD include:

  • creating user scenarios that describe desired application behavior
  • using those scenarios as the basis for development and automated testing

Typically, a team will develop user scenarios by utilizing the 3 amigos perspectives to help clearly define the desired application behavior. The unique perspectives of the product owner, developer, and tester allow us to define application requirements that address business needs and have the necessary detail to build and test.

A user scenario is a concrete example of the application’s desired behavior. Scenarios are written in a business-readable language called gherkin. For example, here’s a sample scenario for a Google search:

Scenario: Basic Google search
Given a web browser is at the Google home page
When the user enters a search request for "Leading EDJE"
Then links related to "Leading EDJE" are shown on the results page

The goal of a scenario is to [1] define any preconditions for the test, [2] describe the user action, and [3] define the expected results from the user action. This is similar to Arrange-Act-Assert with unit testing:

  • Given – any required setup
  • When – the user interaction we are testing
  • Then – the expected results
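To make the parallel concrete, here is a tiny Arrange-Act-Assert unit test in Python.  The Cart class is a made-up example for illustration, not part of the project described here:

```python
# A minimal Arrange-Act-Assert unit test, mirroring Given-When-Then.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total():
    cart = Cart()                # Given / Arrange: any required setup
    cart.add("widget", 9.99)     # When / Act: the interaction we are testing
    assert cart.total() == 9.99  # Then / Assert: the expected result

test_cart_total()
```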

Once a scenario is finalized and pulled into a sprint, a developer and tester work together to build and test it. The tester will also build automation components that will be included in the continuous integration (CI) process.

But wait, aren’t we talking about ETL?

This all works fine for an application development effort with a user interface, but I am working on a project that does not have a user interface at all.

Regardless, I still want requirements to be written in a common language that everyone understands and I want to be able to build test automation components based on these requirements.

Can we really define a user scenario when there is no user interface? I think we can.

Scenarios in action

One of the first new ETL processes we looked at was to receive a file of deactivated Stores from a third party. Per the Product Owner, the ETL process needed to validate the filename, apply the contents of the file to the database, and create an Excel file that summarizes the activity. Even without a user, we can write a gherkin user scenario for this:

Scenario: Process a valid deactivated Store file
Given a valid deactivated Store file has been provided
When the "Process Deactivated Store" ETL process is run
Then the "Input" folder is empty
And the "Completed" folder contains the file that was processed
And the data in the file has been applied to the database
And the Excel output matches the content of the input file

As usual, the Given statement(s) will do any setup that is required. In this example, we will put the input file in the appropriate location prior to running the test.

The When statement is the action we want to test. The ETL process is what we are testing, as described in the When statement of the scenario.

Finally, the Then statement(s) will describe the expected results.
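Under the hood, the step implementations might look something like this Python sketch (our real automation is in C#/SpecFlow; run_etl below is only a stand-in for triggering the actual ETL job, and the folder names simply mirror the scenario):

```python
import pathlib
import tempfile

# Throwaway folder structure standing in for the real ETL environment.
root = pathlib.Path(tempfile.mkdtemp())
input_dir = root / "Input"
completed_dir = root / "Completed"
input_dir.mkdir()
completed_dir.mkdir()

def given_valid_store_file():
    # Given: place a valid input file where the ETL process expects it.
    (input_dir / "deactivated_stores.csv").write_text("store_id\n42\n")

def run_etl():
    # When: stand-in for kicking off the real ETL process. Here it just
    # moves the file, as the real job would after applying it to the database.
    for f in input_dir.iterdir():
        f.rename(completed_dir / f.name)

def then_folders_are_correct():
    # Then: the Input folder is empty and Completed contains the file.
    assert list(input_dir.iterdir()) == []
    assert (completed_dir / "deactivated_stores.csv").exists()

given_valid_store_file()
run_etl()
then_folders_are_correct()
```

The database and Excel assertions from the scenario would follow the same shape: a Then-step function that queries the target and asserts on the result.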

Page Object Design Pattern

One key design pattern for building the automation components in a UI-based testing framework is the Page Object pattern. With this pattern, a class is created for each distinct page or view in the application. With UI tests, the page object encapsulates page locators and page specific logic in a single location. Global page locators and interactions can be further encapsulated in a base Page Object.

We are leveraging this concept with ETL testing. Our initial plan is to develop an ETL Object for each ETL process. As we identify common patterns within the ETL processes, we can further encapsulate common methods in a base ETL Object.
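As an illustration of the pattern, a base ETL Object might capture the folder checks shared by every process, while each subclass adds process-specific details.  This Python sketch uses invented names; the actual implementation lives in the project's C# stack:

```python
import pathlib
import tempfile

class BaseEtlObject:
    """Common behavior shared by all ETL process objects (like a base Page Object)."""
    def __init__(self, root: pathlib.Path):
        self.input_dir = root / "Input"
        self.completed_dir = root / "Completed"

    def input_is_empty(self):
        # Shared helper: no files waiting to be processed.
        return not any(self.input_dir.iterdir())

    def completed_contains(self, filename):
        # Shared helper: the file landed in the Completed folder.
        return (self.completed_dir / filename).exists()

class DeactivatedStoreEtl(BaseEtlObject):
    """Process-specific knowledge lives in the subclass."""
    expected_filename = "deactivated_stores.csv"

    def stage_input(self, contents):
        # Given-step helper: place a valid input file for this process.
        (self.input_dir / self.expected_filename).write_text(contents)

# Usage against a throwaway directory structure:
root = pathlib.Path(tempfile.mkdtemp())
(root / "Input").mkdir()
(root / "Completed").mkdir()
etl = DeactivatedStoreEtl(root)
etl.stage_input("store_id\n7\n")
```

Just as a base Page Object hides locator plumbing, the base ETL Object hides folder plumbing, so each new process object only adds what is unique to it.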

So many tools!

There is a vast array of tools that support automated testing.

Our application is architected using the Microsoft stack, with SQL Server Integration Services and SQL Server Reporting Services as the backbone of our solution.

The key tools selected for our test automation are C#, SpecFlow, and Selenium, as they are in the same technology stack as the application. Using the same tools for development and test automation allows us to be agile with resource allocation, as the same team members can do both development and test automation.


Leveraging BDD with an ETL application development effort has provided a number of benefits:

  • Common understanding – the scenarios are written in a common language understood by all.
  • Focused scope – the scenarios describe what the application needs to do, and allow the developer’s work to be driven by the user’s needs.
  • Quick feedback – the scenarios can be automated, which allows them to be included in the CI process, providing immediate notification if something breaks.
  • Quality and agility – successful automated testing gives stakeholders confidence in the application and less fear of application changes.
  • Living documentation – as the application morphs and grows, the scenarios are kept up-to-date to ensure tests keep passing.

Maybe we do have a user?

Finally, as we built user scenarios for our ETL process it became clear that we DO have a user. The user is the external actor taking action on our application. Sometimes that’s a human user, but in this case the “user” is the batch scheduler requesting the ETL process to be run.

Using BDD and user scenarios to drive application development and automated testing with an ETL process is yielding the same benefits we see with the typical user facing applications.

So you want to build a chat bot...

By Jason Conklin

There are countless articles popping up about how chatbots will replace web sites and change the way people interact with your brand or product. But you need to answer some important questions before diving into the bot world.

What should my bot be able to do?

Real-world bots are doing things like answering basic questions (FAQs), ordering pizza, scheduling meetings, updating task lists, and booking hotel rooms. While repeatable task automation will see continued bot growth, creating a great bot isn’t always that easy. A bot is only as good as the service it exposes.

Make a list of features you think your bot should offer. How are these features different than using an existing website or app? Will these features bring unexpected customer delight? (Hint: they should!)

Where will my bot be located?

A bot can be built for many different channels. It may be hosted on your website, inside Facebook Messenger, Skype, or Slack. The bot could communicate over SMS or E-mail. It could be a voice assistant running on Alexa, or Google Home.

Knowing the channel where your bot will live has a direct impact on what type of content you can provide. In a visual channel, like the web, a bot can send images and videos, but in a voice channel it will be limited to speech. Some channels will limit the interaction with the user to one session. For example, if the chat is directly on a web page you may have a hard time reaching the user after the chat ends, since they may have left your site. However, in a messenger application you can send reminders and notifications long after the original conversation has ended.

A major factor in picking the right channel is determining where users are looking for you. If you have a strong social media following, that channel may be easier to start in. If no one is following you on Facebook, you may have a hard time launching a FB Messenger bot.

How will users interact with my bot?

Think back to the last time you called an automated phone system. You were presented with a menu of options and drilled through a massive tree with no idea what was next. Don’t put your users through that same pain!

Don’t try to be a fake human. If the bot announces itself as a machine, and not a human, the user is more likely to forgive it for not knowing an answer. This also gives you the opportunity to create a brand-focused personality for your bot.

Users will expect your bot to be a bit more open ended than a traditional web page or input form. Don’t force users into a box or restricted flow. Provide help when needed and allow the user to jump around. The flow of your bot should feel like a conversation and not an interrogation.

How do I design a conversational bot?

In the world of conversation-based bots, the user will be talking in their own terms, and the bot is expected to figure out what they mean and reply. The bot’s response should also guide the user on what to do next. This could be another question or a list of options. If the user feels stuck or doesn’t know what to do next, they will likely leave and not return.

Designing the conversation will likely be your most challenging task. You will need to think about common phrases a user might say and the responses a bot will return. Think about all the different ways the user utters the same sentence. Write down the intent of each of these utterances. Start to group utterances with intents and create bot replies for each of them.
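A toy Python sketch of that grouping follows.  Real bots typically use an NLU service to match utterances to intents; the intents, phrases, and replies here are invented purely to illustrate the utterance-to-intent-to-reply mapping:

```python
# Each intent groups the utterances that express it and the bot's reply.
INTENTS = {
    "order_status": {
        "utterances": {"where is my order", "track my order", "order status"},
        "reply": "Let me look up your order. What's your order number?",
    },
    "store_hours": {
        "utterances": {"when are you open", "what are your hours"},
        "reply": "We're open 9am-9pm, Monday through Saturday.",
    },
}

# A fallback reply keeps the user from getting stuck on unrecognized input.
FALLBACK = "Sorry, I didn't catch that. You can ask about orders or store hours."

def reply_to(utterance):
    # Normalize the utterance, then look for an intent that contains it.
    text = utterance.lower().strip("?!. ")
    for intent in INTENTS.values():
        if text in intent["utterances"]:
            return intent["reply"]
    return FALLBACK
```

Exact-match lookup like this breaks down quickly, which is exactly why you enumerate many utterances per intent (or use an NLU service that generalizes from them).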

Don’t use terms that are internal to your business; you will need to write the conversation based on how a user talks.

Get two different color sticky notes. Pick a color for the user and a color for the bot. Write out a few different dialog flows between the user and the bot. Is the dialog too complicated? Was the user able to easily accomplish their goal? If not, go back and rewrite the conversation. If you have a hard time drawing the conversation, the user is going to have a hard time too.

Don’t forget to include ways for the user to change their mind and how you would handle unexpected user input. Be on the lookout for areas where users may get stuck. Provide them with a way to start over or transfer to a live person if appropriate.

Where do I go from here?

Now that you've defined your feature set, picked a channel, and written the dialogs, you should be on your way to building and releasing your bot!

You should research and interact with other bots. Take note of bots that do things well and times where they struggle. Find a few bots in your target industry and think of ways you would improve them.

Check out these additional resources:

Angular. Is it just a name?

By Bob Fornal

Angular naming conventions ... we now have Angular, AngularJS, Angular 1, Angular 2, Angular 4, and more to come. To say the least, conversations with coworkers and clients about Angular in general have become more challenging. Then, when I had a chance to listen to the Angular Core team talking about Angular 4, they made it simple: Angular is Angular 2 and all versions moving forward, while AngularJS refers to what we know today as Angular 1. This is the convention I will be using within this article.

Google's Igor Minar said at the NG-BE 2016 Angular conference in Belgium that Google would jump from version 2 to version 4 so that the number of the upgrade correlates with the Angular version 4 router planned for usage with the release. Minar cautioned against getting too hung up on numbers and advised that the framework simply be called Angular anyway. "Let's not call it AngularJS, let's not call it Angular 2," he said, "because as we are releasing more and more of these versions, it's going to be super confusing for everybody."

When I had time to dig into Angular (2 at the time) on a real-world production project (in Ionic 2), I found the framework very easy to work with … things just made sense. A great many things came together on this project; what I would have estimated at two to four months of development time was done and ready for real-world testing in under five weeks, while I was learning TypeScript, Angular, and Ionic 2 (mobile development)!

Some of the questions I get asked when presenting on Ionic 2 and Angular relate to whether these frameworks are production ready. In my opinion, Angular is production ready; while there will be changes and improvements to keep up with, the framework is solid. The Google team is working at a fast pace, but they are generating framework code that is being used in production environments; this is the logical conclusion of a framework growing from a solid base. Ionic 2 is no longer in beta, but I would hesitate to call it production ready unless there is minimal use of device-specific functionality. Both frameworks are great for generating proof-of-concept code.

Since working with Ionic 2, I have had a chance to listen to talks about React Native (still in the early stages of development) and am interested in learning more about this framework.

Now, on to what I learned from that initial project ...

Working in a framework that does not encourage two-way data binding was unusual at first, but it is a simple pattern to follow. I learned about Angular Modules, Components, then Templates and Metadata. Metadata is wrapped in a Decorator identifying the type of class being defined; the Metadata provides specific information about the class. When designing the Templates, I found that data binding brought a whole new level to what Angular was able to do.

Data Binding is a mechanism for coordinating parts of a template with parts of a Component.

Binding          | Example                   | Description
Interpolation    | {{ value }}               | Displays a property value (from Component to DOM)
Property Binding | [property]="value"        | Passes the value (from Component to DOM)
Event Binding    | (event)="function(value)" | Calls a component method on an event (from DOM to Component)
Two-Way Binding  | [(ngModel)]="property"    | An important form that combines property and event binding in a single notation, using the ngModel directive


In my opinion, while this methodology is more readable and easier to follow, the best part of data binding is elimination of the need for $scope.$apply or $timeout within my code to handle changing data.

If there was a challenging part, it was learning about Observables and how they can be used effectively. While Observables are not necessary in all cases, I started writing them on my project to get familiar with what they could do and how they would impact code and development.

Having worked extensively with Promises, I know that they handle a single event when an async operation completes or fails. There is more to it than that, but this gives us a starting point when discussing Observables. Both Promises and Observables provide us with abstractions that help deal with the asynchronous nature of our applications.

An Observable allows us to pass zero or more events where the callback is called for each event. Often Observable is preferred over Promise because it provides many of the features of Promise, and more. It does not matter if you want to handle zero, one, or multiple events. You can utilize the same API in both cases.

Observables can also be cancelled, which is an advantage over Promises in most cases. If the result of an HTTP request to a server or some other expensive async operation is not needed anymore, the subscription to an Observable allows the developer to cancel the subscription, while a Promise will eventually call the success or fail callback even when you do not need the notification or the result it provides.
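To make the cancellation point concrete, here is a minimal hand-rolled subscription in TypeScript. This is a sketch of the idea only, not the RxJS implementation Angular actually uses; in a real Angular app you would call unsubscribe() on an RxJS Subscription in the same way:

```typescript
// A bare-bones "subject": consumers subscribe, producers push values,
// and unsubscribing stops any further delivery.
class MiniSubject<T> {
  private listeners = new Map<number, (v: T) => void>();
  private nextId = 0;

  subscribe(next: (v: T) => void): () => void {
    const id = this.nextId++;
    this.listeners.set(id, next);
    return () => { this.listeners.delete(id); }; // the "unsubscribe"
  }

  emit(value: T): void {
    for (const next of [...this.listeners.values()]) next(value);
  }
}

const results: number[] = [];
const source = new MiniSubject<number>();
const unsubscribe = source.subscribe((v) => results.push(v));

source.emit(1);
source.emit(2);
unsubscribe();  // like Subscription.unsubscribe() in RxJS
source.emit(3); // no longer delivered
// results is now [1, 2]
```

A Promise has no equivalent of that unsubscribe call: once started, its callback will eventually fire whether or not anyone still cares about the result.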

Observables also provide operators like map, forEach, and reduce, similar to an array.

Suppose that you are building a search function that should instantly show you results as you type. This sounds familiar, but there are a lot of challenges that come with this task. Over the years I have seen a lot of creative code written to handle them.

  • We do not want to hit the server endpoint every time the user presses a key. We only want to hit it once the user has stopped typing, not with every keystroke.
  • We also do not want to hit the search endpoint with the same query params for subsequent requests.
  • We also need to deal with out-of-order responses. When we have multiple requests in-flight at the same time we must account for cases where they come back in unexpected order. Imagine we first type computer, stop, a request goes out, we type car, stop, a request goes out. Now we have two requests in-flight. Unfortunately, the request that carries the results for computer comes back after the request that carries the results for car.

Observables make handling these cases easy. In fact, this is one of the primary examples for using Observables at this time.
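In RxJS, this pipeline is typically built from the debounceTime, distinctUntilChanged, and switchMap operators. Below is a framework-free TypeScript sketch of the last two rules (skip duplicate queries; keep only the latest request's response); the class and method names are hypothetical, chosen just for illustration:

```typescript
// Sketch of the two trickiest typeahead rules, without any framework:
// 1. don't re-issue a request for the same query text, and
// 2. when responses arrive out of order, keep only the latest request's result.
class TypeaheadState {
  private lastQuery: string | null = null;
  private latestRequestId = 0;
  current: string[] = []; // results currently shown to the user

  // Returns a request id, or null if the query is a duplicate and was skipped.
  startRequest(query: string): number | null {
    if (query === this.lastQuery) return null; // rule 1: distinct queries only
    this.lastQuery = query;
    return ++this.latestRequestId;
  }

  // Called when a response comes back; stale responses are ignored.
  handleResponse(requestId: number, results: string[]): void {
    if (requestId !== this.latestRequestId) return; // rule 2: latest wins
    this.current = results;
  }
}

const state = new TypeaheadState();
const reqComputer = state.startRequest("computer"); // id 1
const reqCar = state.startRequest("car");           // id 2

// The "car" response arrives first, then the stale "computer" one.
state.handleResponse(reqCar!, ["car", "carpet"]);
state.handleResponse(reqComputer!, ["computer"]); // ignored: stale
// state.current is ["car", "carpet"]
```

With Observables, this bookkeeping disappears into the operator chain: distinctUntilChanged drops the duplicate query and switchMap cancels the in-flight "computer" request the moment "car" is issued.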

What I have learned:

  • The naming conventions used by Google will take some time to sink in.
  • Angular is a truly robust framework, production ready.
  • The learning curve when using Angular is minimal since most of the framework is intuitive.
  • Many of the hassles of AngularJS have been reworked in a richer way.
  • Ionic 2, based on Angular, while fun, is not robust enough at this time for production.
  • React Native may be an intriguing solution for mobile development.
  • Data-binding and Observables have come a long way and can take away much developer pain.


What you don't know about the String could hurt you

By Dave Michels

Application development involves a great deal of character data. Almost every business application written today performs a significant amount of string management: not just on the business data being processed, but also on the strings the application itself maintains for things like labels or a spell-check dictionary. All of this string data consumes memory, and potentially a lot of it.

One of the ways that modern development languages have evolved to optimize their runtimes is the concept of “interning” or “pooling” strings. In many runtimes, like the JVM and CLR, every string is immutable, meaning that once a string is created, it cannot be changed. This is done in order to allow strings to be “pooled”, with multiple objects pointing to the same string if it already exists in the internal pool. This is why a practice such as regular string concatenation within a loop is inefficient compared to using a class like StringBuilder: each time strings are concatenated, a new string is created and pooled. These immutable string implementations are based primarily on efficiencies for the runtime. By caching strings in memory, there can be multiple references to the same string throughout a program. This has a great deal of benefit from an optimization perspective.
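The article's examples are JVM and CLR specific, but the same immutability cost exists in JavaScript and TypeScript, where the rough analog of StringBuilder is collecting pieces in an array and joining once. A small illustrative sketch, not a benchmark:

```typescript
// Building a string piece by piece: each += on a string would allocate a
// new immutable string, so we collect the pieces and join exactly once.
const parts: string[] = [];
for (let i = 0; i < 5; i++) {
  parts.push(`item${i}`);
}
const joined = parts.join(",");
// joined is "item0,item1,item2,item3,item4"
```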

However, these cached strings are held in the memory of the runtime process, and that is a bad thing when it comes to sensitive data such as passwords or social security numbers. The moment a string is created it is “interned” and added to the string pool for the runtime environment. What this means is that for individuals with nefarious intents, scraping memory or dumping a process’s core can yield a plethora of valuable data that you may not realize is hanging around in your application. To illustrate, I’ve created a simple Java program and forced it to dump core using the kill command.

I set up my terminal to allow core dumps on my Mac:



I’ll run my basic Java app:

Run Application


It’s easy to find running JVM apps via a simple process list command:

Find JVM


Then send a signal to the process to have it dump the core:



We can see the core dump in the /cores directory (in the default location for BSD/macOS):



Now it’s just a matter of running “strings” (or “gstrings” from the Homebrew binutils package) and redirecting the output to a file (jvm-coredump-strings.txt in this case):



Obviously, these files are quite large – 6 GB for the core file and 127 MB for the text file. However, they don’t need to be around long enough to be conspicuous: the core file can be removed immediately after creation, leaving the much smaller text file, and even that only needs to exist long enough to be moved off the machine or combed through for interesting data. I realize this is the brute-force way of illustrating the approach, and that arguments can be made regarding the security measures modern OSes employ to prevent this sort of thing. But the point is that this is an approach that can be taken to compromise your application’s data, and people more clever and nefarious than I can find ways to exploit it. By employing a couple of common practices, developers can lessen or negate this risk.

Whenever possible, treat sensitive character data as character or byte arrays, so that the data can be garbage collected and easily overwritten. The .NET framework has a SecureString class to deal with just this type of sensitive character data; it can effectively eliminate resident sensitive data from memory. Also, beware of sensitive data being transformed into strings by convenience frameworks calling toString() (or ToString() in .NET); serialization frameworks, which transform data to JSON or XML, are notorious for this. Sometimes this is unavoidable, but you can help obfuscate the data by not assigning it to member attributes with names such as “Password” that can be easily searched.
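In Java, this advice translates to holding secrets in a char[] or byte[] and overwriting the array (for example with Arrays.fill) once it has been used. A rough TypeScript analog with a byte buffer is sketched below; garbage-collected runtimes give no hard guarantees about copies, so treat this as illustrative only:

```typescript
// Hold the secret as mutable bytes instead of an immutable string,
// and wipe the buffer as soon as it has been used.
function withSecret<T>(secret: Uint8Array, use: (s: Uint8Array) => T): T {
  try {
    return use(secret);
  } finally {
    secret.fill(0); // overwrite the sensitive bytes in place
  }
}

// Hypothetical usage: "using" the secret here just measures its length.
const password = Uint8Array.from("hunter2", (c) => c.charCodeAt(0));
const length = withSecret(password, (s) => s.length);
// afterwards, password contains only zero bytes
```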

We devs deal with strings all day, every day. Strings are the most intuitive way to transmit, debug, and store application data. However, with system security breaches becoming ever more prevalent, we should always keep in mind the sensitivity of the data we are using within our applications and exactly how the runtime manages it.

Analysts - Tools and Traits

By Brett Gerke

The purpose of this article is to describe the tools and skills desired in analysts. If you have, or want, a position as an analyst, you will want to read on.

First, why do we make a distinction between BAs (Business Analysts), BSAs (Business SYSTEMS Analysts), RAs (Requirements Analysts), and SAs (Solutions Analysts or Systems Analysts)? The job appears to be the same; the names appear to be altered to reflect attitudes, business politics, or focus, or (in my opinion) to provide confusion and ambiguity. Here, the overarching term 'analyst' will be used. To further level set, the term 'customer' is the person, organization, or other party that will be using the product we are producing or enhancing. They may also be called 'Product Owners' or 'Users'.

Basic Analyst Tools

Some skills and tools for analysts are basic. They can be learned from taking classes, IIBA certification, on-the-job training, or by reading the plethora of books on the subject. There are two tools you must start with. Consider these the minimum set of expectations. The first tool is elicitation: finding out what the customer wants and needs. The basics are simple: most customers want to do THEIR JOB, More, Better, Faster. Beyond this, a more honed idea of their needs and wants is what elicitation is all about. Here you deal with the YEABUTs (Yea, I do want that, but...), and the I KNOW IT WHEN I SEE ITs (Just show me what you think I need and I'll tell you if that's what I want). It is important not to give up; press on to understanding. To do this you will need some of the skills described later.

The second tool is articulation of specifications and requirements: producing artifacts that will communicate, to the customer and the technical team, the needs and wants for the system. In the analyst world, there are formats and conventions for producing artifacts. These are forms of communication using visuals, audio, and demonstrations. Some examples: Story Cards; Actor Catalogs; Use Cases; Process Flows. For more of the skills that you can learn to do your job, please refer to classes, the IIBA's BABOK, on-the-job training, and books.

Basic Soft Skills

There are experiential skills to be learned and honed. They can't be learned from classes or books (no matter what the author says). These are 'soft skills'.

As an analyst, a big part of your job is to build relationships. These relationships are critical to your success. Failure to form relationships (or destroying relationships) can sound a death knell for your job and possibly your career. Everyone you come in contact with is an opportunity to form a relationship. Here are several soft skills you need to examine and improve. Political skills: knowing the relationships that exist, the nature of those relationships, and the reasons they exist. Understanding if and where YOU fit into relationships is key. Many organizations have a hierarchy that should be appreciated. Titles, functions, and seniority are all important to forming, nurturing, and understanding your relationships within an organization. Public speaking is another soft skill to pay attention to. Your job requires that you be able to clearly present opinions, facts, and theories to your team and beyond it. Being able to put a presentation together and communicate verbally is a necessity. Relationships can be started or furthered by an effective presentation.

The ability to communicate through visuals is a great skill to have. This is not difficult if you do some homework and expose yourself to others' visualization techniques. Then copy them! Don't feel you are required to innovate. The important idea here is to create visualization that 'speaks to the audience' and 'tells the story'. In other words, it provides not just information, but the context of the information. A great series of books on this subject is by Edward Tufte; get them from your library.

One skill that shouldn't need to be listed, but is the Achilles' heel of many analysts, is listening. Learn to not only hear what is being said, but understand it. Stay off your cell phone, email, and other distractions. Be engaged when talking to people. React to what you heard, ask questions, and summarize; one suggestion is to use the phrase '...let me teach back what I understand...'.

As a final point, don’t take things personally. Understand that people are not perfect; if you do your best at forming relationships and it fails, count your wins and learn from the rest. In other words, have a 'thick skin'.

Skills that separate you from the pack

There are other experiential tools and skills that can be real game changers. Most analysts that have acquired them have built real patterns of success. Once again, these cannot be taught or acquired from books. You must use them, concentrate on them, and be critical of yourself to sharpen these skills.

Being able to argue is NOT the same as being able to persuade. Persuasion is an art, and depending on the person and the situation it can take on several forms. An analyst skilled at persuasion can quickly become a change agent and a leader. Persuasion allows you to 'argue' without potentially offending your audience.

Flexibility is a critical skill to build. Being able to bend without compromising your beliefs, ethics, and values makes you a person who can build relationships with the most difficult people. The first step in building flexibility is to have a good grasp on your beliefs, ethics, and values. The second step is to understand how others differ from you. Try to empathize with them, possibly sympathize with them. Lastly, find a way to fit into their version of the world while keeping your integrity. HONESTY is in capital letters because it may be the most important skill everyone should have. The difficulty comes when you are around others who are less than honest.

When I was writing my dissertation, my academic advisor gave me the following advice: “Remember, all we have are our words. Make them count.” This is great advice for all writing and speaking; in the analyst profession, it is not only great advice, it is a requirement. Good communication goes well beyond the ability to write, use standard formats, create great visualizations, and apply good grammar. It involves using the correct LANGUAGE to communicate ideas. Analysts must be able to understand and use the correct words. There are three important languages analysts should concentrate on: the language of the organization; the language of the domain; and vernaculars.

Every organization has its own way to express certain ideas. Some organizations use abbreviations to communicate; this language is a way to articulate 'tribal knowledge' (internal knowledge held by those within the organization). Some terms used by organizations are vernaculars. For example, when discussing the movement of a vehicle, one organization uses the vernacular 'traction' and another uses 'travel'. When discussing a heart condition, one healthcare organization may describe the condition as 'AFib' while another uses 'atrial fibrillation' for the same condition.

Learning to 'speak' the organization's language helps build relationships within the organization. There is one great way to learn the language of the organization, listen. Listen in meetings, ask questions when you don't understand, and then USE the organization's language to communicate within the organization. Do your best to fit in and you will improve your communications.

Effective communication also requires knowledge of the 'Language of the Domain' with which the organization identifies. These are standard terms that hold a uniform meaning within a domain, allowing effective communication between organizations in that domain. We don't need to go very far to find examples. Analysts use words that are accepted by all analysts to mean something: Use Cases, Requirements, and Stories each express a commonly understood concept.

One of the 'must haves' for an analyst is critical thinking. Interestingly, this is a commonly misunderstood skill. Let's start with the skill that is often confused with critical thinking: analytical thinking. Analytical thinking is the ability to break complexity into its component parts. Critical thinking is the ability to further learning by questioning EVERYTHING. Critical thinking starts with the young child who constantly asks 'WHY?' Look for proof of ideas, concepts, and statements of fact. Here is a bullet-point list of several components of critical thinking.

  • Understand links between ideas
  • Determine the importance and relevance of arguments and ideas
  • Recognize, build and appraise arguments
  • Identify inconsistencies and errors in reasoning
  • Approach problems in a consistent and systematic way
  • Reflect on the justification of one's own assumptions, beliefs and values

Use this list to start improving your Critical Thinking Skills.


Always be open to grow…never stop growing and learning.

Intro to Augmented Reality

By Tom Hartz



augment (verb): 1. make (something) greater by adding to it; increase.


Augmented Reality is an evolving set of technologies with the potential to improve our lives in a variety of ways. To define it succinctly, AR is the rendering of digital information superimposed onto one’s view of the physical world. You’ve seen it before; a prime example being the down markers on football TV broadcasts.

down markers


Why care about AR?

Until recently, the prospect of seamlessly blending the physical and digital anywhere you want has remained in the realm of Science Fiction. AR has in fact existed in various forms dating back to the 1960s, but none of the implementations of the past have been portable or very practical for consumer use. However, we are now witnessing this technology become mainstream, due primarily to the proliferation of mobile hardware. The successive iterations of the smart phone market have driven us towards having a compact, low cost, powerful set of sensors and display residing in nearly everyone’s pocket. We are at an unprecedented level of hardware saturation, enabling some really compelling AR applications, and we haven’t even seen the endgame yet. Will wearables replace all our smartphones? What does the next generation of mobile computing look like?


Speculating about the future possibilities is endless and entertaining, but I digress. Right now, in the current mobile app landscape, AR is getting big! Lots of cool apps exist today that utilize computer vision and tracking algorithms to do all manner of neat things. If you haven’t seen these, take a few moments to check out some links:

  1. IKEA Catalog App - place and view furniture pieces in your living room
  2. Snapchat - face tracking with fun meme-ery
  3. HoloLens - featured at the 2016 Build conference running a Vuforia app

Those are just a few standout examples. While there are many apps already applying this technology, there is still plenty of room left for innovation and creative new ideas.


Diving into AR Development

I first became interested in AR when I attended the M3 Conference a few years ago, and heard a keynote presentation from Trak Lord of Metaio. I was inspired, so I looked around and found a plethora of platform options for building AR apps. From my own research, I can assert that the Vuforia SDK has the easiest learning curve today. I have used this toolkit to build one demo application for a paying client, a few internal company prototype apps, and many just-for-fun personal projects as well.

Looking back I am glad I didn’t invest much time learning the Metaio SDK. They were acquired by Apple in 2015, and have since shut down all public support. Apple has been very quiet about the acquisition, not releasing any news about what they are doing with the technology. Clearly they are looking to innovate in the AR space and are doing some internal R&D right now. Personally, I am excited to see what they come up with, and wonder what built in AR features the next iPhone may have!


Vuforia History

Vuforia began as an in-house R&D project at Qualcomm, a telecommunication and chip-making company. At the time, the company was looking for computationally intensive apps to showcase the prowess of their Snapdragon mobile processors. Nothing flashy enough for them existed on the app market, so they decided to push the boundary and create some new software on their own. They built the Vuforia base SDK and launched it as an open source extension of the AR Toolkit.

Since its inception, Qualcomm has augmented the base SDK with a variety of tracking enhancements and other proprietary features. To sustain the project long term, they migrated away from the open source model and eventually sold ownership of the library to the software company PTC. Unlike the sale of Metaio’s SDK to Apple, this transfer kept support very much alive for its development community. Since then, Vuforia has grown to be one of the premier Augmented Reality SDKs, used by hundreds of thousands of developers worldwide.


Using the Tools

Apps using the Vuforia framework today require a license key. Deployment licensing options start at a reasonably low price, and prototype “Starter” apps can be developed free of charge! You can create an unlimited number of prototype applications at no cost via their Developer Portal.

Building custom AR experiences necessitates thinking in 3D, and having great tools goes a long way toward easing that burden of complexity. The Unity 3D game engine is a very intuitive environment for editing scenes in 3D, and its scripting engine uses C#, making it a fantastic choice for developers who are versed in .NET. To me, the best part of the Vuforia SDK is the Unity plug-in. It enables you to build AR applications, without writing any code at all, that can run on pretty much any mobile phone or tablet.

Putting together a marker-based AR app is incredibly easy with these tools. If you have no experience working in Unity, there will be a learning curve involved. A good primer for understanding the Project, Hierarchy, Scene, and Inspector panels can be found here. Once you are familiar with the tools, building AR apps is easy and a lot of fun! Below is a short list of steps to exemplify how quickly you can get an AR app up and running on a webcam enabled machine. Not included here are the steps for deployment to a mobile device (a topic for another day).

  1. Create New Project in Unity (use 3D settings).
  2. Delete Main Camera from the scene.
  3. Import Vuforia Unity Package (downloaded from the Developer Portal).
  4. Import target database (downloaded from the Developer Portal).
  5. Add two prefabs to the scene from Vuforia Assets: ARCamera and ImageTarget.
  6. Select ARCamera in the scene hierarchy. In the Inspector Panel, paste in the App Key (created via Developer Portal), then Load and Activate image target database (two checkboxes).
  7. Select ImageTarget in the scene hierarchy. In the Inspector Panel, select from the dropdowns for Database and Image Target (stones).
  8. Import a 3D model asset to the project (drag and drop from file system into Unity).
  9. Add the model asset to the scene as a child object of the Image Target.
  10. Center and resize model as needed to cover the Image Target.


You can download my completed example Unity project from GitHub.


AR Tom

Custom Authorization Filters in ASP.NET Web API

By Chad Young

The ASP.NET Web API framework is a great choice for those that want a lightweight Service Oriented Architecture (SOA) to facilitate passing XML, JSON, BSON, and form-urlencoded data back and forth with a client application. Inevitably, you’ll need to secure at least some of the endpoints.

At a minimum you’ll need to have some sort of Authentication and Authorization mechanism in place.

  • Authentication: The process of confirming that a user is who they say they are.
  • Authorization: The process of determining if the authenticated user has the proper roles/permissions to access a piece of functionality.

In Web API, the message pipeline looks something like this:

Web API message pipeline


As the picture illustrates, you can handle authentication in two places: a host-level (IIS) HttpModule can handle authentication, or you can write your own HttpMessageHandler. There are pros and cons to both, but the main focus of this article is custom authorization filters, which occur next in the pipeline after a user has been authenticated. Once the user is authenticated by an HttpModule or a custom HttpMessageHandler, an IPrincipal object is set. This object represents both the authenticated user and certain role membership information. Some applications have their own custom role and permission implementations, which is where custom authorization attributes become useful.

Authorization filters are attributes used to decorate your code, and they can be applied at three different levels:

  1. Globally: In the WebApiConfig class you can add:
    				public static class WebApiConfig {
    					public static void Register(HttpConfiguration config) {
    						config.Filters.Add(new MyCustomAuthorizationAttribute());
    					}
    				}
  2. At the controller level:
    				[MyCustomAuthorization]
    				public class AController : ApiController {}
  3. Or at the endpoint level:
    				public class AController : ApiController {
    					[MyCustomAuthorization]
    					public async Task<HttpResponseMessage> AnEndpoint() { return null; }
    				}

The code associated with each attribute gets executed in the same order as the levels listed above, so you can nest functionality if need be. Attributes are also inheritable, so you can put one on a base class and it will be inherited by any controller that derives from it. The exception to this is the built-in AllowAnonymous attribute, which short-circuits the need for authorization.

When the AuthorizeAttribute is encountered the public method OnAuthorization is executed. The base implementation is below:

				public override void OnAuthorization(HttpActionContext actionContext) {
					if (actionContext == null)
						throw Error.ArgumentNull("actionContext");
					if (SkipAuthorization(actionContext))
						return;
					if (!IsAuthorized(actionContext))
						HandleUnauthorizedRequest(actionContext);
				}

As you can see, an error is thrown if there is no action context. Then SkipAuthorization is called to see if an AllowAnonymous attribute is present and the authorization process should be skipped. Finally, IsAuthorized is called, and if it fails, HandleUnauthorizedRequest is called. The overridable methods on this attribute are IsAuthorized, HandleUnauthorizedRequest, and OnAuthorization, so there are a couple of ways a solution could be implemented. You could override OnAuthorization and set the response message yourself, but the best solution for the scenarios I’ve run into is to override the IsAuthorized method and let OnAuthorization perform its base execution. The method names also suggest that, when determining whether a user is authorized, IsAuthorized is the natural place for that logic.

Below is the very high level code representing the custom filter implementation:

				public class MyCustomAuthorizationFilter : AuthorizeAttribute {
					protected override bool IsAuthorized(HttpActionContext actionContext) {
						if (!base.IsAuthorized(actionContext)) return false;
						// Do some work here to determine if the user has the correct permissions to
						// be authorized anywhere this attribute is used. Assume the username is how
						// you'd link back to a custom user permission scheme.
						var username = HttpContext.Current.User.Identity.Name;
						return username == "AValidUsername";
					}
				}

There are a couple of things that should be pointed out about the above code. First, the attribute inherits from AuthorizeAttribute in the System.Web.Http namespace. The System.Web.Mvc namespace also contains an AuthorizeAttribute with similar behavior for the MVC framework, but the two are not compatible. All of the magic happens in the overridden IsAuthorized function, where you have access to the HttpActionContext and, through it, the Request, the request header values, the ControllerContext, the ModelState and the Response. Any work to decide whether the user is authorized is done inside IsAuthorized. If the user is not authorized (IsAuthorized returns false), the response is set to 401 (Unauthorized) and returned. If custom processing of an unauthorized request is needed, you can also override the HandleUnauthorizedRequest method.

Proper use of these attributes can clean up your code so that authorization is a separate concern from the functionality of the endpoint itself. It also allows you to completely customize your role/permissions architecture as well.

Effectively Documenting a Development Project

By Keith Wedinger

Over the course of my 26+ year career as a software developer, I've had the opportunity to work on many software projects for several companies. More often than not, I joined a team that was working on a project that started some time ago. When joining an ongoing project, one of the challenges I often faced was getting up to speed and being productive as quickly as possible. What made this significantly more difficult was the lack of good development project documentation. So, how do we solve this? Ultimately, new members are going to be brought on to help with a project. Or in consulting, the project is nearly complete and it is time to transition ongoing development and support of the project to our client. This is where effective development project documentation is an essential tool.

So, what constitutes effective development project documentation? The goal is to get new team members or the client onboarded and productive as quickly and as painlessly as possible, with minimal assistance from current team members. Depending upon the complexity of the project, a good onboarding target to shoot for is one day. So how does one achieve this goal? Effective project documentation must cover the essentials below, and it must be as specific as possible. Ambiguity is not a good thing, so strive to limit the choices where possible.

  • Include contact information for all team members and their key responsibilities. This lets everyone know who to contact with questions.
  • What operating system and version is needed? In most cases, development workstations will be provided to team members with the necessary OS.
  • If the project involves mobile development, what mobile devices are needed and where does one request/acquire the devices? Ideally, the necessary hardware will be readily available.
  • What SDKs and/or JDKs and versions are needed? Include download links.
  • What IDE and version is needed? Again, include a download link. If a license needs to be purchased or acquired before the IDE is downloaded and installed, clearly document how this is done. Ideally, the necessary licenses will already be purchased and ready to use.
  • How is the IDE configured to conform to project development standards? Every modern IDE can export its settings so that they can later be imported. Leverage this to make standards-conforming IDE configuration practically foolproof.
  • What version control system is being used and how is access to it requested? The popular choices are Git, Subversion and CVS. Include any contact information and/or instructions required to request access. Ideally, access to version control will be set up prior to onboarding.
  • What version control client software and version is needed? Remember those download links.
  • What build tool software and version is needed? Some IDEs, like Xcode, do not require separate build tool software; some do. Examples include Ant, Maven and Gradle. Don't forget those download links. Also include any step-by-step instructions required to install the build tool software, because nearly every build tool requires some OS-specific configuration.
  • What dependency management server is used and how is access to it requested? Build tools like Maven and Gradle depend upon a dependency management server to download the dependencies needed to build a project, and most corporations concerned with software licensing host a dependency management server in house to control which libraries can be used to develop and build projects. Include any step-by-step instructions required to configure the IDE and/or build tools to use the dependency management server.
  • Once everything above is installed and configured, how is the project pulled from source control, built, tested, installed/deployed and executed? Provide repeatable step-by-step instructions.
  • What standards and/or guidelines are followed when developing, testing, and committing changes?
  • Include an FAQ section that includes answers to commonly encountered questions or problems.


Over time, project changes will require changes to the project documentation. So, make sure that the project documentation is always kept up to date.


When teams spend the time to effectively document their project as described above, it enables effective and efficient onboarding of and knowledge transfer to new team members and clients which allows them to get to the business of being productive as quickly as possible.

Some Musings about Embedded Application Development

By Bill Churchill

With the ubiquity of ARM processors and the *nix distributions running on them, embedded application development now more closely resembles desktop or server application development. No longer must an application be its own operating system. Anyone who has done application development in a *nix environment can develop for an embedded Linux appliance. However, there are some significant differences that change the approach to development in these environments. Right after application design, hardware constraints dominate a developer’s thoughts.

An embedded device has a limited number of processes running at any one time, and each must be a good team player: no single application can monopolize any resource on the device. Using a smartphone as an example, if one application takes up all the memory or processing cycles, the other applications will cease to perform in a responsive manner. You may not even be able to make calls or send texts until a reboot clears the problem.

Memory is a major limitation. In a PC environment, one typically has ample memory, and even when this is not the case, you still have the option of upgrading it. While it may be possible to run a memory-managed VM like the JVM on your device, performance can vary greatly. Typically, the language of choice for a new application will be closer to the metal (C or C++), which allows finer-grained control over memory allocation. Embedded applications usually grab any memory they will need at startup. This prevents out-of-memory errors or application halts for garbage collection (if a managed environment is used). These types of errors can hide during development and QA but rear their ugly heads in the field.

While SD cards are growing in capacity, the root disk space is still very limited. Often the SD card is not used, as EPROM may be preferred. Small, tight libraries are extremely important in this type of development. The ubiquity of BusyBox on embedded devices illustrates this. Even within the application code, keeping things small and simple is important. Many systems load the entire root filesystem into memory. Another side benefit of using small libraries is the inherent small memory footprint when the application is loaded.

This is by no means an all-inclusive list, but hopefully it will assist any developers looking to make the leap to embedded development. As phones become more sophisticated and wearables more common, now is a good time to look at embedded application development for fun and/or profit.

Augmenting Architectures with a Service Proxy

By Trent Brown

You've committed to implementing a modular, service-oriented architecture (or, in today's parlance, a microservice architecture). Business logic will be broken down into small, discrete components and exposed as REST services. The promise of scalable, flexible and adaptable software applications is at hand! While this approach to designing solutions has many of the advantages being hyped, there is at least one downside: applications consuming all of these distributed services have to keep track of them. Services being consumed could be deployed across multiple servers in different data centers and, increasingly, in the cloud. One approach to taming this complexity for service consumers is the use of a service proxy.

The service proxy does much of what a traditional proxy server would do: sit in front of destination resources and provide routing and filtering. In addition to these basic functions, a service proxy can also handle authentication and authorization, providing single sign-on to secured endpoints. There are open source service proxies, such as Repose, that support common authentication schemes out of the box. Repose supports the OpenStack Identity Service as well as the Rackspace Identity Service with minimal configuration. Additionally, its modular, pluggable architecture allows for developing a custom filter to integrate with any other authentication/authorization provider.

The proxy can also provide vital protection to the lower layers in the system stack by serving as a circuit breaker, detecting load outside the normal range and preventing requests from flooding downstream. A sufficiently advanced proxy will perform this rate limiting in an intelligent way by cutting off access only for abusive users or IP addresses while maintaining service for other consumers.
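The core of that kind of intelligent rate limiting is simple to sketch. The following is a generic Python illustration of the sliding-window idea, not Repose's (or any other proxy's) actual implementation; the class name and limits are invented for the example:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window rate limiter: allow at most `limit` requests per
    `window` seconds for each client, so a single abusive consumer is
    cut off while service is maintained for everyone else."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client id -> timestamps of recent requests

    def allow(self, client, now=None):
        """Return True if this request is within the client's budget."""
        now = time.monotonic() if now is None else now
        recent = self.hits[client]
        # Drop timestamps that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.limit:
            return False  # over budget: reject this request
        recent.append(now)
        return True
```

A real proxy would key the window on an authenticated principal or source IP and answer rejected requests with an error response (typically HTTP 429) instead of dropping them silently.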

Service Proxy

But wait, there's more! A service proxy can also provide a wealth of information about how services are being utilized. The proxy can generate logging that can be streamed to an analytics engine to provide insights into usage patterns, allowing for better allocation of infrastructure resources. Understanding which services are most valuable to consumers can also inform the design of future services.

Besides the Repose proxy mentioned above, Netflix has open sourced the Zuul service proxy. This proxy integrates with other open source tools Netflix provides for managing a microservice architecture. If you are interested in the details on some of the cool stuff Netflix has released to the community, check out their GitHub repo here.

Project Management 101: Time Management

By Michael Zachman

Here's the situation: You get to work and you have all of these tasks that you need to get done. You feel confident that you will get them accomplished, so you begin your day. You are fifteen minutes into your first task and the phone rings. You finish the call ten minutes later and think to yourself, "Now, where was I?" It takes you about five minutes to get back to where you were and an Instant Message pops up from a co-worker which is "urgent". You help them and are about to get back at it when someone pops their head into your cubicle and says, "Do you have a minute?" You want to respond with an emphatic "NO!" but you do the right thing and help them with their issue. The day continues on in this fashion and before you know it, you look up and it is time to go home. You have accomplished nothing on your list of tasks and you wonder, "Where did the day go?" Does this kind of day sound familiar to you?

It is very easy to get distracted in today's world, especially with all of the technological advances over the last 25 years (e.g. Microsoft Lync, instant messaging, online meeting sites, video conferencing, email, texting). You have to manage your time effectively and efficiently to maximize your productivity. Don't get too discouraged, because there are many things you can do to manage your time better and have a productive day. Here are a few tips to help you improve those time management skills:

  1. Set specific times to check your Email – Emails come in from the beginning to the end of the day and can consume all of your time if you let them. One thing that I have found helps is to set specific times of the day to check your email. I check my email first thing in the morning and first thing when I get back from lunch. I respond to emails at those times and ONLY at those times. Unless there is a specific reason to deviate from this, you will find that you get a lot more done when you have extended periods of time to focus on the tasks at hand. This can also be applied to voicemails!
  2. Do Not Disturb! – If you absolutely have to get work done, put up a "Do Not Disturb" sign so that everyone knows you need to focus. Again, in today's world, just putting up a sign outside your cubicle or office may not be enough. You may need to set all of your technical communication devices to "I'm Busy" as well. Set your Instant Messaging status to "Busy" or "Do Not Disturb". Close your email and turn off your smart phone so that you are not interrupted by emails and texts if the work you are trying to complete is that important. I have found that people are usually very understanding about this but believe me, if they really need you, they will find you.
  3. The 80/20 Rule – By the numbers, the 80/20 rule means that 80 percent of your outputs come from 20 percent of your inputs. Well, I think this can be applied to your time management skills: 20% of your actions, discussions, and thought processes produce 80% of your results. It is impossible to get everything done because there will ALWAYS be something to do. But if we prioritize our tasks and stay focused despite the inevitable interruptions, the 80/20 rule tells us that we will produce results. So don't stress out when you don't get everything on your list done and remember that based on the 80/20 rule, you have probably accomplished a lot more than you realize.

So, the next time you have a day where you don't feel like you have accomplished everything you wanted to accomplish, try using these 3 time management tips to maximize your daily productivity by investing your time wisely. You may just find that you are a little less stressed, a little more focused and have produced a lot more results than you ever realized possible.

"The key is in not spending time, but in investing it." – Stephen R. Covey

Use Cases for IaaS

By Dave Michels

One of the most pivotal advancements in information technology in the last three years is the advent of Infrastructure as a Service (IaaS). By definition, IaaS is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers, and networking components. The service provider owns the equipment and is responsible for housing, running and maintaining it, while the client typically pays on a per-use basis. The best-known example is the Amazon cloud, but other cloud providers such as Windows Azure also support IaaS as part of their offering.

IaaS is most easily justified when an organization is just starting its IT services or products. Because the provider hosts the physical infrastructure, the client incurs little to no physical infrastructure cost. Building or leasing a data center is a costly endeavor when the resources required to get IT up and running are not known initially. An entire organization's physical infrastructure can be designed and set up remotely on the provider's infrastructure. The key element here is that thought should be put into the logical structure of the network, subnets, firewalls, servers, segmentation, and so on. Organizations should ensure that the virtual infrastructure is designed based on the requirements for the services needed, so as not to waste time and effort. That said, part of the benefit of this model is that any issue or refactoring is typically far less costly than it would be if physical hardware were purchased, set up, cabled and configured, only to find that it was not the best approach. With IaaS, tearing down and rebuilding is a matter of clicking through a management console to disable or delete existing servers, routes, and other virtual infrastructure, then re-creating them based on new requirements.

Many larger organizations have already made a significant investment in their IT infrastructure. When a company invests hundreds of thousands or even millions of dollars in physical infrastructure such as servers, switches, SANs, and load balancers, the case for IaaS is different. An important point to note is that IaaS should be looked at as a complement to existing infrastructure, not a means to replace it. The case where this is most evident is transient infrastructure needs, such as non-persistent application requirements. For example, in an IT development shop that practices Continuous Integration (CI), the resources needed to execute builds and run automated unit and integration tests can be a bottleneck. If 50 different builds are configured to execute based on changes detected in revision control, but only 3 servers can execute builds in your CI environment, a team may wait hours to receive build feedback or have deployment artifacts generated. The wait will vary based on project size, compilation time, and the tests to be executed, but there is a bottleneck nonetheless. Because the resources required to accommodate this transient spike in builds are expensive and time-consuming to procure, it makes little sense to occupy valuable internal server resources with servers that may only be needed a small percentage of the time. Many CI server platforms already have plug-ins for existing IaaS providers that start cloud-based servers as needed to accommodate a spike in build requirements, then shut them down when the build queue reaches a minimum threshold. The cost incurred is only for the time the servers were running to execute builds and deplete the build queue.
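The scaling policy behind those plug-ins boils down to a small decision rule: run roughly one on-demand agent per batch of queued builds, capped at a spending limit, and run none when the queue is empty. The sketch below is illustrative only; the function name and thresholds are assumptions, not taken from any particular CI product:

```python
import math

def agents_needed(queued_builds, builds_per_agent=5, max_agents=10):
    """Decide how many on-demand cloud build agents to run for the
    current queue. Scale up roughly one agent per `builds_per_agent`
    queued builds, capped at `max_agents` to bound cost; an empty
    queue means every on-demand agent can be shut down."""
    return min(math.ceil(queued_builds / builds_per_agent), max_agents)
```

A scheduler would call this on each poll of the queue, starting or terminating cloud instances until the running count matches the returned value.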

The same case can be made for automated load and functional testing. IaaS is ideally suited to automated load testing and to simulating production load distribution for public or globally distributed applications. Large organizations often have multiple data centers, but they may not be geographically dispersed enough to come close to accurately simulating production load. Using cloud-based IaaS, organizations can spin up virtualized servers in data centers dispersed throughout the world and coordinate tests that simulate large traffic volumes coming from Asia, Europe, and South America hitting a site in your corporate data center in Columbus, OH. Again, once the tests are complete the servers can be shut down, and the costs incurred are minimal, based only on the length of time the tests ran and the servers were running.

Cloud providers that support IaaS should be looked at as a tool that can easily supplement IT infrastructure for transient resources, saving valuable internal hardware and infrastructure resources and more effectively simulating real-world conditions that cannot be easily replicated with existing resources. This translates into cost savings for what would otherwise be idle servers and better simulation of real-world production environments.

Lessons from the Battlefield - Stakeholder Management

By Erica Krumlauf

When you think of the most desired skills of a project manager, where does stakeholder management fall? I often find that this skill is one that falls low on the list for many organizations. They don't call it out as a required skill on their project manager job descriptions, and don't ask tough questions on how to deal with challenging stakeholders during job interviews. Why is this skill often overlooked? Is it that project managers and teams believe that stakeholders aren't important members of the project? Or do they believe you can overcome a 'bad' stakeholder if you just push them out of the way and ignore them?

In my vast experience in program and project management of client projects, I have worked with a multitude of stakeholders, each with a different personality. I have been screamed and sworn at during tough times, and I have been smiled at and hugged at the end of successful projects. However, I have always had a high degree of mutual respect and trust with my stakeholders (yes, even those that screamed at me). Why is this? I believe it's due to the manner in which I have approached each of those stakeholders – be it the CIO of a multi-billion dollar international client or the business manager of a small company. I hope you find these 5 keys beneficial in helping you navigate the muddy waters of stakeholder management.

Are the right people around the table?

Oftentimes, the biggest challenge with stakeholder management is the fact that key stakeholders are absent from the table. A stakeholder is defined as "any individual, group or organization that can affect, be affected by, or be perceived to be affected by a project". As project managers, we often come into projects well after the "stakeholders" are identified. But it is our job, and our responsibility, to ensure that the right people have a seat at the table. Leave no stone unturned. Understand what systems and business processes are impacted by your project, and get out and talk to the business and other IT teams. Find out how they may or may not be impacted. If there are impacts, talk to key personnel and encourage them to sign up to be involved as project stakeholders. Even if they refuse to come to meetings and be actively involved, they will be impacted, and it is your job to ensure they understand what that impact is.

Okay I'm a Stakeholder. Now What?

Signing up to be a stakeholder is easy. But do your stakeholders understand what that really means? Everyone must understand their role and responsibilities; and understand what accountability they will hold on the project. Will they own providing requirements for how the new system will work? Will they own providing timely answers to outstanding questions? And if so, how timely must they be in decisions? Will they own ensuring resources from their area are responsible for completing tasks? Do they know the timelines for these tasks? Mutual understanding of accountability and ownership is key in stakeholder management. Make sure everyone has a clearly defined definition of their role and are accountable for the items that they own.

What's their Agenda?

Everyone on a project has an agenda, whether it be personal or for the benefit of the entire organization. Getting to know your stakeholders and what is important to them is critical to ensure alignment and buy-in from everyone. Get to know what makes them tick. What is their current opinion of the project? Are they in it to see it succeed, or are they a naysayer who will do everything in their power to try to make the project fail? Understanding each stakeholder and their attitude towards the project is key. Conduct stakeholder analysis to understand each person's expectations and how they define success on the project, and use this information to refine the project purpose and goals. Make sure that the project is a "W" for all stakeholders – not just the ones that scream the loudest.

We're all in this together.

Teamwork. Where would we be without it? To build teamwork you need trust and loyalty. Getting this from a group of stakeholders that have different agendas is challenging. But much like herding cats, the project manager must ensure that all stakeholders are in alignment and working towards the common goal. To do so you must build an environment of open communication, one where all stakeholders have the opportunity to speak up and provide input. Make sure the common goal is well known – and that everyone is striving to ensure it is met.

Do you hear what I hear?

Transparency. This is the 5th and final key in stakeholder management. A project manager that is dishonest, not forthcoming and hides key information, from any or all of the stakeholders, results in a disaster. Share with all stakeholders the same information, in a timely manner. Do not shelter certain stakeholders from details because you are fearful of their response, or because you don't want their opinions and thoughts. If all stakeholders have the same information and knowledge, they can work together to resolve any project issue.

Follow these simple keys and the results will be astonishing. You'll find better alignment on project scope, fewer personal agendas, and more collaboration in ensuring that the project goals are met. After all, if your stakeholders are in alignment, the rest of your team will follow suit and you'll all be high-fiving at the finish line!

Hybrid vs. Native - What Should Be Your Mobile Strategy and Why?

By Keith Wedinger

Since joining Leading EDJE in February of 2012, I have been involved in several mobile app development opportunities, both from a sales perspective and from a software architect perspective. One of the first questions that usually comes up from the client is this: should I use hybrid or native to develop my mobile app? Before I answer this question, let's briefly go over what each of these choices is.

The hybrid approach generally means using web-based technologies like HTML5, CSS and JavaScript to develop a web app that is then packaged and delivered as a native app using PhoneGap. To the user, it generally looks and operates like any other app installed on their mobile device. The biggest and most trumpeted benefit of this approach is that, using one code base, one can target multiple platforms (think write once, run anywhere). But there is a tradeoff: the app is restricted by what the web browser on the targeted device is capable of doing. Specifically, the UI and the app's performance will be constrained by the device's web browser. On iOS devices, this is less of an issue because its web browser generally supports the latest web standards and performs well. On Android devices, this constraint depends significantly on the Android version on the device. Versions older than Ice Cream Sandwich have browsers that typically do not perform well and contain several web rendering anomalies; browsers in newer Android versions perform better and contain fewer.

The native approach means using the development stack for each platform to develop a native app. For example, developing a native iOS app means using Xcode, the iOS SDK, and Objective-C or Swift on a Mac, and delivering the app to iOS only. The biggest benefits of this approach are performance and end user experience: the app is constrained only by what is possible within that platform's SDK and by the capabilities of the device. The biggest drawback often called out is that one must develop and deliver a completely separate app for each targeted platform, which multiplies the time and cost of developing and delivering an app by roughly the number of platforms. There are development tools and platforms available to help mitigate this drawback and leverage skills you already have in house. For example, Xamarin is a C#/.NET framework based development platform that allows one to develop native mobile apps with a significant level of code reuse across the targeted platforms. Please note that this reuse is typically 60-70%.

Now, let's get back to our question. What approach should one use? Well, the answer is "it depends" and here is why. Before one decides on an approach, one must carefully consider the following.

Requirements. What business problem are you trying to solve? What benefits will the mobile app bring to your customers or to your enterprise? What data and/or systems will your app integrate with? Simply stating "I/We need a mobile app" is not good enough. Understand what you are going to develop and deliver and why.

Define your targeted user base. What devices are they using? What platforms and versions are they using? What screen sizes, resolutions and orientations do you need to support? Each variation can increase UX development time by 25-50% and UX testing time by 100%. If your targeted user base is using only one platform (examples: iOS, Android, Windows Phone), then the key benefit mentioned above for the hybrid approach may not be a factor.

Know your team. What skills can you leverage? Consider languages, platforms, UX design and QA when answering this question. Also know your existing code base: what can you reuse? Then map these skills and your existing code base to what is required for each approach.

Understand the costs. This is independent of the approach one chooses. For each device being targeted, a real device must be purchased for testing purposes. For each platform being targeted, one or more development workstations supporting that platform must be purchased. For example, targeting iOS requires a Mac to build and deliver the app.

After you carefully consider what is outlined above, then you are ready to make a knowledgeable mobile strategy decision. Also consider developing prototypes to help make and verify your decision. Avoid the "I have a hammer and everything looks like a nail" decision. One approach is definitely not ideally suited for all solutions.

Solving a Leaky Basement with Raspberry Pi

By Joseph Beard

When my wife and I bought our first house a few years ago, we thought we had everything figured out. We moved in mid-September and, after an initial issue with the heat pump, everything went smoothly. But we discovered just how quickly that can change when our basement flooded the next Spring.

After the initial panic wore off, I discovered that the sump pump had failed, allowing the water level to rise and overflow the crock. I replaced the sump pump and assumed that all would be fine. A few months later, however, I arrived home to another soggy basement. This time, the sump pump had shaken itself against the side of the crock and trapped the float switch in the off position.

We were fortunate both times in that I happened to catch the flood as it was starting, which allowed me to save the carpet and many of our belongings from ruin. While I took measures to prevent the sump pump from moving out of place, I knew that it was only a matter of time before it somehow failed again.

I needed a way to be alerted of an impending disaster before it happened. My first step led me to a simple water level alarm like this one. It saved us from two more potential incidents, but it comes with a critical flaw: it is only effective if someone is around to hear it. If we are on vacation or even just out for the day, the basement could still flood and no one would know until it was too late. Since I always have my phone, I wanted something that could send an alert, via SMS or email, if something was going wrong. This would allow me to call a friend or neighbor to check on things and save the day even if I am across the country.

I played with a few ideas with an Arduino, but I was never satisfied with the networking options available to the platform. When the Raspberry Pi was announced, I knew I had finally found the device that I needed: a small, extremely low-powered Linux system with a full suite of the standard tools. In other words, it was cheap and reliable. I immediately preordered and (eventually) received a Model B from the first shipment.

Raspberry Pi

The Milone Tech eTape Continuous Fluid Level Sensor is a printed, solid-state sensor with a resistance that varies in accordance with the level of the liquid in which it is immersed. No moving parts to get stuck! I used the MCP3008 Analog to Digital Converter and a custom differential amplifier circuit to interface the analog sensor output with the general-purpose input/output (GPIO) pins on the Raspberry Pi.

Milone eTape Sensor

I wrote a simple Python script to periodically poll the current value of the eTape sensor. Since the output value from the ADC is a 10-bit integer (i.e., between 0 and 1023), this was an appropriate place to convert the value into a depth in inches. This script publishes the readings to a ZeroMQ topic.
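The raw-reading-to-depth step can be sketched as a simple linear mapping. The calibration constants below are hypothetical placeholders; real values come from measuring the sensor's output at known water depths, and the actual script also publishes each reading to ZeroMQ.

```python
# Sketch: convert a 10-bit ADC reading from the eTape sensor into inches.
# All calibration constants here are hypothetical examples.

ADC_MAX = 1023        # full-scale value of a 10-bit ADC
ADC_AT_ZERO = 100     # raw reading observed with the tape dry (hypothetical)
DEPTH_AT_MAX = 12.0   # inches of water at full scale (hypothetical)

def adc_to_depth(raw):
    """Linearly map a raw ADC value (0-1023) to a depth in inches."""
    raw = max(raw, ADC_AT_ZERO)  # clamp readings below the dry baseline
    span = ADC_MAX - ADC_AT_ZERO
    return (raw - ADC_AT_ZERO) / span * DEPTH_AT_MAX

print(adc_to_depth(100))   # dry tape -> 0.0
print(adc_to_depth(1023))  # full scale -> 12.0
```

In practice the polling loop would call this on each sample before publishing the depth, so every downstream subscriber works in inches rather than raw counts.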

Another Python script subscribes to the topic. When the value exceeds a threshold, it sends an email to my wife and me alerting us of the issue. It follows up with another email when the value returns to normal.

A third script subscribes to the topic and archives the readings into a data file. I would like to use this data in the future to enhance the alerting capabilities. For example, if the sensor value swings wildly or remains relatively static for an unusually long period of time, it may indicate a malfunction. For now, this script merely collects the data.
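The malfunction heuristic described above could be sketched over a window of recent readings. The thresholds here are hypothetical, and this is an illustrative sketch rather than the script in use:

```python
# Sketch: flag suspicious sensor behavior from a window of recent readings.
# A stuck sensor reports a nearly constant value; a failing one may swing
# wildly between samples. Both thresholds are hypothetical.

def looks_suspicious(readings, static_eps=0.01, swing_limit=2.0):
    """Return True if readings are unusually static or unusually jumpy."""
    if len(readings) < 2:
        return False
    spread = max(readings) - min(readings)
    if spread < static_eps:         # relatively static for the whole window
        return True
    jumps = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return max(jumps) > swing_limit  # a wild swing between adjacent samples

print(looks_suspicious([3.0, 3.0, 3.0, 3.0]))  # static -> True
print(looks_suspicious([3.0, 3.1, 3.0, 3.2]))  # normal -> False
print(looks_suspicious([3.0, 8.5, 2.9, 3.1]))  # wild swing -> True
```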

Finally, because no sensor project is complete without charts, a final script subscribes to the topic and forwards all of the sensor readings to Xively. Xively provides a simple way for me to view the current water level and chart recent values from anywhere. The graph below shows a recent 24-hour period of readings.

water level chart

Ironically, there has not been a single high-water event since I built and installed this system, but knowing that it is in place and functioning gives me significant peace of mind whenever I travel.

Java NIO and Netty

By Andrew May

The java.nio package was added to the Java Development Kit in version 1.4 in 2002. I remember reading about it at the time, finding it both interesting and a little intimidating, and went on to largely ignore the package for the next 12 years. Tomcat 6 was released at the end of 2006 and contained an NIO connector, but with little or no advice about when you might want to use it in preference to the default HTTP connector, I shied away from using it.

So what is NIO anyway? It appears that it officially stands for "New Input/Output," but the functionality added in Java 1.4 was primarily focused on Non-blocking Input/Output and that's what we're interested in.

In Java 1.7, NIO.2 was added, containing the java.nio.file package that tries to replace parts of java.io (most notably java.io.File), and there the "New" moniker makes more sense, but NIO.2 has little to do with what was added in NIO. So it's another Java naming triumph.

The traditional I/O APIs (e.g., InputStream/OutputStream) block the current thread when reading or writing, and if they're slow or perhaps blocked on the other end, then many threads can end up unable to proceed. This is how your web application grinds to a halt when you have a database deadlock and all 100 connections in your connection pool are allocated. Each thread can only support a single stream of communication and can't do anything else while waiting.

For a servlet container like Tomcat, this traditional blocking I/O model requires a separate thread for each concurrent client connection, and if you have a large number of users, or the connections use HTTP keep alive, this can consume a large number of threads on the server. Threads consume memory (each thread has a stack), may be limited by the OS (e.g., ulimit on Linux) and there is generally some overhead in context switching between threads especially if running on a small number of CPU cores.
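The alternative that java.nio introduced is a selector: one thread multiplexing many non-blocking sockets. As an illustrative sketch (not Java or Netty code), the same idea can be shown with Python's selectors module, which exposes an equivalent readiness-based API; the payload and port choice here are arbitrary:

```python
import selectors
import socket

# One thread services the listening socket and every client connection by
# asking the selector which sockets are ready, instead of blocking per-socket.
sel = selectors.DefaultSelector()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

# A plain blocking client, just to exercise the event loop below.
client = socket.create_connection(server.getsockname())
client.sendall(b"ping")

echoed = None
while echoed is None:
    for key, _events in sel.select(timeout=5.0):
        sock = key.fileobj
        if sock is server:
            conn, _addr = server.accept()   # a new connection is ready
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(1024)          # guaranteed not to block here
            sock.sendall(data)              # echo it back
            echoed = data

reply = client.recv(1024)
print(reply)  # b'ping'
```

The key point mirrors the Java model: the server side never dedicates a thread to a connection; it only touches a socket when the selector reports it ready.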

I still find the non-blocking I/O support in the JDK to be somewhat intimidating, which is why it's fortunate that we have frameworks like Netty, where someone else has already done the hard work for us. I recently used Netty to build a server that communicates with thousands of concurrently connected clients using a custom binary protocol. Out of the box, Netty has support for common protocols such as HTTP and Google Protobuf, but it makes it easy to build custom protocols as well.

At its core is the concept of a Channel and its associated ChannelPipeline. The pipeline is built up of a number of ChannelHandlers that may handle inbound and/or outbound messages. The handlers have great flexibility in what they do with the messages, and how you arrange your pipeline is also up to you. You may also dynamically rearrange the pipeline based upon the messages you receive. It's similar in some ways to Servlet Filters but a lot more dynamic and flexible.

Netty manages a pool of threads in an EventLoopGroup that has a default size of twice the number of available CPU cores. When a connection is made and a channel created, it is associated with one of these threads. Each time a new message is received or sent for this channel it will use the same thread. To use Netty efficiently you should not perform any blocking I/O (e.g., JDBC) within one of these threads. You can create separate EventLoopGroups for I/O-bound processing or use standard Java utilities for running tasks in separate threads.

The API assumes asynchronicity; for example writing a message returns a ChannelFuture. This is similar to a java.util.concurrent.Future, but with extra functionality including the ability to add a listener that will be called when the future completes.

							channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
								@Override
								public void operationComplete(ChannelFuture future) throws Exception {
									if (future.isSuccess()) {
										// the write completed successfully
									} else {
										// handle the failure, e.g. log future.cause()
									}
								}
							});
Netty is under active development and in use at a number of large companies most notably Twitter. There's a book in the works but the documentation is generally good and the API is fairly straightforward to use. I've found it a pleasure to use and would recommend it for projects that require large numbers of concurrent connections.

Using the Decorator Pattern

By Nathan Kellermier

The decorator pattern is used to extend the functionality of an object, similar to inheritance. What sets the decorator pattern apart is the pattern can be used to dynamically extend the functionality of an object without requiring all instances of that object to include the extended functionality. In this way, the functionality can be added or removed at run-time based on user interactions or as the result of a business rule.

While examining the decorator pattern we see it consists of an interface, a concrete implementation, an abstract implementation forming the decorator base, and the actual decorator classes. The goal of the pattern is to provide the ability to wrap the concrete implementation with the decorator classes and provide new and/or differing functionality from the original object.

Looking at an example where there is a MessageProvider class containing a single method that returns a message, we will see how the decorator pattern can be used.

First, we need an interface and concrete implementation that returns the message passed to the constructor.

								public interface IMessageProvider
								{
									string GetMessage();
								}

								public class MessageProvider : IMessageProvider
								{
									private string _message;
									public MessageProvider(string message) { _message = message; }
									public string GetMessage() { return _message; }
								}

With the concrete classes in place, we can create the decorator hierarchy to extend the functionality of an IMessageProvider class. We will create two decorators, the first inserts text before the message in the MessageProvider, and the second simulates logging to the console by writing out an IMessageProvider's message before returning the message to the caller. The decorator hierarchy consists of an abstract base class specifying that an instance implementing IMessageProvider (described above) needs to be passed into the constructor. The IMessageProvider instance is then stored in a variable and a base implementation is created that acts as a pass-through to the stored IMessageProvider.

							public abstract class MessageProviderDecoratorBase : IMessageProvider
							{
								protected IMessageProvider _messageProvider;
								protected MessageProviderDecoratorBase(IMessageProvider messageProvider)
								{
									_messageProvider = messageProvider;
								}
								public virtual string GetMessage() { return _messageProvider.GetMessage(); }
							}

Having created the decorator base class, we can look at the first of the decorators. The GreetingMessageDecorator is a simple decorator that takes in an IMessageProvider and adds a simple greeting to the message returned from calling GetMessage. The purpose of this decorator is to demonstrate how the decorator can add functionality to an object that implements IMessageProvider.

						public class GreetingMessageDecorator : MessageProviderDecoratorBase
						{
							public GreetingMessageDecorator(IMessageProvider messageProvider) : base(messageProvider) { }
							public override string GetMessage()
							{
								return "Hello, your message is: " + base.GetMessage();
							}
						}

The second decorator in the sample application is the ConsoleLogMessageDecorator. This decorator performs an action on the IMessageProvider it was provided during construction before eventually returning the result of the GetMessage method to the caller. The action taken is a simulated log of GetMessage's result written to the console. As simple as this action is, it could be extended to a timer wrapping the call to the IMessageProvider's method, special handling, or a before/after type of action.

						public class ConsoleLogMessageDecorator : MessageProviderDecoratorBase
						{
							public ConsoleLogMessageDecorator(IMessageProvider messageProvider) : base(messageProvider) { }
							public override string GetMessage()
							{
								string message = base.GetMessage();
								Console.WriteLine(Environment.NewLine + "LOG: {0}" + Environment.NewLine, message);
								return message;
							}
						}

Finally, we have a simple driver application. The driver builds up a MessageProvider and then creates the decorators, displaying the result of the GetMessage call in each version. Note, the ConsoleLogMessageDecorator takes as input the GreetingMessageDecorator, prints it to the screen as a log message, and returns the chained results of each decorator performing its action. The output demonstrates how the decorators have affected/interacted with the object being decorated, and, in the case of the ConsoleLogMessageDecorator, how chaining decorators works.

						class Program
						{
							static void Main(string[] args)
							{
								IMessageProvider messageProvider = new MessageProvider("Sample message");
								IMessageProvider greeting = new GreetingMessageDecorator(messageProvider);
								IMessageProvider logDecorator = new ConsoleLogMessageDecorator(greeting);

								Console.WriteLine(messageProvider.GetMessage());
								Console.WriteLine(greeting.GetMessage());
								Console.WriteLine(logDecorator.GetMessage());
							}
						}

code output

Although the example presented is simple, the decorator can be used to implement complex before/after actions by chaining operations in multiple decorators. One could add timing, logging, and transactions to the same object simply by creating the appropriate decorators and adding them in a chain at runtime. A business rule could determine that a special row needs to be inserted in a database marking an entity in some way, and a decorator adding the needed functionality could be added to the object when the rule is triggered, changing the course of action for the object in question.
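The runtime-chaining idea translates directly to other languages. Below is a minimal sketch re-expressed in Python (the article's code is C#), with a hypothetical TimingDecorator standing in for the timing use case mentioned above; the class and method names are illustrative, not part of the article's source.

```python
import time

class MessageProvider:
    """Concrete component, mirroring the C# MessageProvider."""
    def __init__(self, message):
        self._message = message
    def get_message(self):
        return self._message

class GreetingDecorator:
    """Wraps any provider and prepends a greeting, like the C# example."""
    def __init__(self, inner):
        self._inner = inner
    def get_message(self):
        return "Hello, your message is: " + self._inner.get_message()

class TimingDecorator:
    """Hypothetical decorator measuring how long get_message takes."""
    def __init__(self, inner):
        self._inner = inner
        self.last_elapsed = None
    def get_message(self):
        start = time.perf_counter()
        result = self._inner.get_message()
        self.last_elapsed = time.perf_counter() - start
        return result

# Decorators chain at runtime: timing wraps greeting wraps the provider.
provider = TimingDecorator(GreetingDecorator(MessageProvider("Sample message")))
print(provider.get_message())  # prints "Hello, your message is: Sample message"
```

Because each decorator holds only a reference to the object it wraps, any combination can be assembled (or skipped) at runtime when a business rule fires.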

The source code for this article can be cloned from GitHub here for use under the MIT license.

Project Management 101: Managing Client Expectations

By Michael Zachman

Here's the situation: You are the project manager on a project and all seems to be going well. You are about halfway done with the project and are on schedule and on budget according to your plan. It is time for a steering committee meeting with the client and you are ready. During the meeting, you provide a walk-through of the application and its capabilities when the client stops you and says in an irritated, frustrated voice, "This is not what we asked for and does not meet our requirements!"

This can be a very frustrating scenario, but it happens more often than you think. The good news is that there are steps you can take to mitigate your project risks and make sure that the client is on the same page with you and getting what they want. Here are three things you can do to help you manage your project more effectively and efficiently:

  1. Set Expectations Early and Often – From the day you step in with the client and begin managing the project, you need to ensure that you are setting expectations with the client based on the current project parameters (budget, schedule, resources, vendors, etc.). As project parameters change, be sure that you are updating the client on how this impacts the project. Whether it is good news or bad news, they have hired your expertise and will appreciate you giving it to them straight.
  2. Document Everything – No matter how trivial or how complex it seems, you need to document everything. Take good notes (I highly recommend Microsoft OneNote) during meetings and phone calls, and keep all project emails organized so that you can refer to them later. Project documentation is essential at all stages of the project, and you need to get signoffs/approvals from the client to ensure that they are in agreement with what is being produced for them. This does not always mean that things will not change, but when you have a record of what they agreed to, it is difficult for them to argue about the impact of any changes to the project.
  3. Communicate, communicate, communicate – You have heard with real estate that it is all about location, location, location. Well, I say with project management that it is all about communication, communication, communication. There is no such thing as over-communication on a project (or in most areas of life!). If you are communicating to your client audiences appropriately and consistently, there will be less of a chance for misunderstandings and more of a chance for a smoothly run project.


So, to avoid being blindsided by misguided or unrealistic expectations, try using the three ideas above to manage the client's expectations toward the successful project outcome that everyone wants to achieve. I think you will find that there will be less confusion, more understanding, and, in the end, a client who will be pleased with the results and thank you for a job well done!