The Making of Future Makers

Control Group is a proud Founding Partner of a new high school, called the Urban Assembly Maker Academy, which opened last week in our Lower Manhattan neighborhood. UA Maker is a Career and Technical Education (CTE) school with a mission to empower students to be successful, adaptive citizens through challenge-based learning and the principles of design thinking.

UA Maker’s curriculum prepares students for both college and careers by teaching them how to use design thinking and technology to solve problems. The school features a new kind of classroom experience that models aspects of the modern agile workplace so that students can develop the skills, tools, and habits of inquiry to be tomorrow’s “makers.”

Control Group got involved with UA Maker Academy because we believe that the world’s challenges require problem solvers who are equipped with both critical and creative thinking skills. They will need to be curious about the world around them and empathize with others in order to develop the best solutions for people, communities, and businesses. Beyond a textbook education, the next generation of strategists, engineers, and designers deserve exposure and experiences in tackling real world problems.

In our business, we use principles of design thinking to create successful products and experiences for our clients. By leveraging a human-focused mindset, we have a clear path and method for collaborating with stakeholders to create the most impactful solutions. In collaborating with the Urban Assembly, and an energized group of industry and higher ed representatives, an amazing group of ambitious and talented educators are providing the students with an opportunity to approach their world with empathy, confidence, and action as the backbone of their high school experience. This is just what we need to build the future.

 

Data Freedom: Part 2 of 3

Sure, it’s not exactly like Scottish independence, but I feel like William Wallace might still give us the nod for our own effort at (data) freedom.

A few weeks ago we started looking at data freedom because, while there are many advantages to using SaaS vendors, there are some issues to keep an eye on. One of those issues is finding ways to access and use the data that’s been sent out into the vendor’s system. The first installment of this series was about a small problem with a fast solution. We didn’t have to worry about real-time or frequently-changing data.

But for Vendor 2, things weren’t so easy. Like well-known #2s Art Garfunkel and Ed McMahon, Vendor 2 is easy to overlook but nonetheless necessary on a day-to-day basis. Vendor 2 is one of those internal tracking vendors we use every day with data that changes quickly and often.

Vendor 2 got the job done for us, but sadly, their reporting left something to be desired. Sure, they had reports, but there was no way to link to external data. And don’t get me started on getting it to do any complicated slicing-and-dicing. We ended up with a lot of people who needed to pull down spreadsheets and re-do the same calculations month after month. We heard the cries from people-who-will-remain-nameless (but who are me):  “I can write the darn SQL if you just let me!”

So, how did we set up a system that uses SaaS vendor data but reports the way we want it to? We set up a system to copy their information to a database we control… and then we wrote the darn SQL.

Easier said than done, for sure. For this case, we called in bigger guns and took a look at Talend, a full Enterprise Service Bus (ESB) solution. The goal was to create a data store on our own terms, one that auto-updates in near-real time as information changes on the vendor's side, via Vendor 2's full-featured API. Now we can do what we need with the data: write the SQL for static reports or hook up a BI tool to view it. Whatever we need.
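To make "write the darn SQL" concrete, here is a minimal Python sketch of the kind of reporting query we can now run against the replicated Postgres store. The table and column names are hypothetical stand-ins, not Vendor 2's actual data model:

import psycopg2

# Connect to the local replica that the ESB keeps in sync with Vendor 2.
conn = psycopg2.connect("dbname=vendor2_replica user=reporting")

MONTHLY_HOURS_BY_PROJECT = """
    SELECT project_name,
           date_trunc('month', entry_date) AS month,
           SUM(hours) AS total_hours
      FROM time_entries
  GROUP BY project_name, month
  ORDER BY month, project_name;
"""

with conn, conn.cursor() as cur:
    cur.execute(MONTHLY_HOURS_BY_PROJECT)
    for project, month, total_hours in cur.fetchall():
        print(project, month, total_hours)

The same store can feed a BI tool just as easily; the point is that the reporting logic is now ours.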

Just that easy? Well…

[Screenshot: Talend. That "easy".]

In this case, we used the Community Edition of the ESB to see what it could do. One thing we found right away was that Talend organizes things in two ways: "jobs" and "routes." The routes side is based on Apache Camel, something Enterprise Architecture veterans will know well. Working with an agreed-upon standard has its own advantages, but we also found routes to be more robust than jobs. For instance, they could handle exceptions, such as the API responding slowly, and cases where we needed to "page through" long sets of data. With that, we were off and running, with a few hurdles to hurdle.

Nice Flow Diagrams Do Not Mean Non-Technical: Starting with a "route", we went data object by object to create a parallel data model on our side, so we could write the SQL, and mapped each object to a specific API call. To the uninitiated, the not-so-user-friendly Camel calls look like this:

.setHeader("RowCountInPage").groovy("headers.RowCountInPage = headers.RowCountInPage.isInteger() ? headers.RowCountInPage.toInteger() : 0")

Not exactly drag-and-drop syntax. That's a fairly simple one, actually, but even so it uses Camel along with Groovy templating, and it can only be viewed or edited via a "property" of one of those flow icons, not in a text file. The GUI aspect falls away fairly quickly.

In short, this is a case that called for real development. It’s not rocket science but also not to be taken lightly. Don’t let the nice flow diagram fool you.

An API Is A Unique Blossom (sometimes): On the Vendor 2 side of things, they do have an API, but there were no quick answers here. You can do an awful lot with a full-featured API, but it can take a while to learn how, as each API is a little different. In this case, each call required crafting a specific XML structure, with its own way of paging through large data sets and some occasionally opaque terminology. There was no easy "getProjects()" type of call to fall back on. We were able to work our way through Vendor 2's documentation, but the process also made us appreciate a solution like the one we designed for Vendor 1, which allowed us to avoid that level of mucking about in somebody else's data model.
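For illustration, here is a rough Python sketch of what paging through that kind of API looks like. The endpoint, XML element names, and page size are made up; Vendor 2's real schema was considerably more particular:

import requests
import xml.etree.ElementTree as ET

ENDPOINT = "https://api.example-vendor.com/reports"   # hypothetical endpoint
PAGE_SIZE = 500

def fetch_all_rows():
    page = 1
    while True:
        # Each call wants a hand-crafted XML request body.
        body = ("<request><object>TimeEntry</object>"
                f"<page>{page}</page><pageSize>{PAGE_SIZE}</pageSize></request>")
        resp = requests.post(ENDPOINT, data=body,
                             headers={"Content-Type": "application/xml"},
                             timeout=30)
        resp.raise_for_status()                      # surface slow or failed calls
        rows = ET.fromstring(resp.content).findall(".//row")
        if not rows:
            break                                    # ran off the last page
        for row in rows:
            yield row
        page += 1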

And Here You Thought Things Like Version Control Were Straightforward: Just when you thought you had git mastered and it would be easy to work in a team again, along comes a centralized system like this. As it turns out, a Talend workflow isn't just a few nice, editable XML files. Instead, Talend creates sets of files and auto-updates portions of them in unpredictable ways. For instance, the updated date is part of a project file, so every save "changes" the project. Be sure to close Talend before committing your files, since they change while the Studio product is running!

Talend, the company, wants you to upgrade to the paid edition to have their internal version control, but that would also mean a separate repository specifically for their tool.  In the end, we got it to work in our version control and lived to tell the tale. Unfortunately there were bumps in the road in places we thought might roll like the autobahn.

In general, Talend worked for us, but using the Community Edition wasn't always so straightforward. For instance, going with the "routes" side of Talend meant diverging from Talend's main offering in favor of the more standard Camel implementation. Using routes let us lean on the wealth of Apache Camel documentation, but it cut us off from much of Talend's own forums and documentation, which focus on the "jobs" side. Alas, there wasn't an easy middle ground that captured the positives of both sides.

In the end, Vendor 2 was a lot more work to integrate than Vendor 1. That's no surprise. But now that the integration is up and running, the volume of information we're capturing and updating is huge, and we can write those reports however we want: business analytics packages, home-written darn-SQL statements, etc. The Excel re-work won't be necessary. And we did all of this without touching the main functions of Vendor 2.

We took on a lot more configuration work, but we now find ourselves with a full backup of our data, able to do what we want with it rather than only what the vendor's reports allow. This level of integration also makes us a little less dependent on Vendor 2. Should we need to swap them out someday, we will start with all of our historical data completely at the ready.

After all, even Simon and Garfunkel eventually broke up.

Summer Internship 2014: Visualizing Workplace Data

This is a post by Samuel Lalrinhlua, a student at Syracuse University in the Master of Science in Information Management (2015) program. He was also a summer intern on our Enterprise Architecture team. 

I first came across Control Group when I read the 'Best Places to Work 2012' list published by Crain's. I was immediately drawn in by the photo of their Star Trek-esque hallways and thought to myself, "that would be a cool place to work." But I never thought in a million years that I would actually get an opportunity to work for this company, much less write about my internship experience on their blog.

When I arrived in June I was given a detailed description of the projects that I would be working on this summer: add visualizations of CG data on the monitors that hang above the Support Center and find other interesting ways to show data around the office. My fellow intern, Soohyun Park, and I were asked to collaborate and create visualizations that used and displayed dynamic data.

[Image: conference room availability display]

I worked with several tools, such as Talend and PostgreSQL, to extract relevant internal data such as Personal Time Off (PTO) status, work anniversaries, timesheet usage, and project status, among other things. All of this data was used to create the visualizations that are now shown on the big screens in the office. Many of these technologies were new to me, and it took some troubleshooting along the way to see results. Soohyun and I also developed an iPad visualization that displays the status of the conference rooms: red shows "booked" and green shows "available." App development was new to me, and I learned a lot from this experience.
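As a rough idea of how the room display works, here is a simplified Python sketch. It assumes the booking data has already landed in a Postgres table via Talend; the table and column names are hypothetical:

from datetime import datetime
import psycopg2

conn = psycopg2.connect("dbname=office_data user=dashboard")

def room_statuses(now=None):
    now = now or datetime.now()
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT r.name,
                   EXISTS (SELECT 1
                             FROM bookings b
                            WHERE b.room_id = r.id
                              AND %s BETWEEN b.start_time AND b.end_time) AS booked
              FROM rooms r
        """, (now,))
        # Red means "booked" and green means "available," as on the iPad display.
        return {name: ("red" if booked else "green") for name, booked in cur.fetchall()}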

I am glad that I got to spend my summer with CG. I have gained invaluable experience, both professionally and personally. Thank you all for your support, and for the coffee (I'm going to miss that!). And thank you for making me a part of the Control Group team this summer.

“Live long and prosper.”

Summer Internship 2014: Designing for Scalability

This is a post by Alex Daley, a student at Elisabeth Irwin High School (Class of 2015) and a summer intern on our DevOps team.

I have been deeply interested in many kinds of engineering for most of my high school career. On the hardware side, I have launched Arduinos into the stratosphere, led our robotics team to victory, and led seminars for teachers interested in 3D printing. On the software side, I have built apps and designed websites. The common theme has been that these projects were pretty "hacked together": I quickly built things that worked, but they probably weren't scalable and were rarely reusable.

I came to CG this summer to work with cloud services and sensor networks. I was tasked with the design and implementation of a highly-scalable, real-time sensor network. I had worked with sensors in the past, but building something on a large scale that had to be solid enough to expand was an interesting challenge. The goal was to have a number of sensors report data to a central location.

A few years ago, that central location would probably have been an SQL database. Before starting at CG, that is definitely how I would have implemented it. Instead, David introduced me to a service from Amazon called Kinesis. Kinesis is a "data stream" that allows really large amounts of data to be collected and retrieved with very low latency. Not only was it immensely scalable, it was also ridiculously simple. I had no experience with Amazon Web Services when I came to CG, but I was able to get Kinesis working in a few hours. Just like that, the entire backend for the sensor network was taken care of.
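To give a sense of how little code that takes, here is a minimal sketch of pushing a reading into Kinesis using boto3, the current AWS SDK for Python (the project may well have used an earlier SDK at the time; the stream name and record layout are hypothetical):

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_reading(sensor_id, value):
    kinesis.put_record(
        StreamName="sensor-network",                     # hypothetical stream name
        Data=json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8"),
        PartitionKey=sensor_id,                          # spreads records across shards
    )

send_reading("1.24.2", 57)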

For my first shot at the actual network, I started somewhere familiar: the Arduino. I was convinced that the small, inexpensive board was perfect for every application, as I had used them on everything from automatic fish feeders to weather balloons. I hooked up a temperature sensor, plugged it into a computer, and used a Python script to parse the serial data and send it to a PHP site that would relay the readings to Kinesis.
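Roughly, that pipeline looked something like this sketch: pyserial reads the Arduino's serial output, and each line gets posted to a PHP page that relays it to Kinesis. The port name and URL are placeholders:

import serial      # pyserial
import requests

arduino = serial.Serial("/dev/ttyACM0", 9600)            # typical Arduino serial port
GATEWAY = "https://example.com/kinesis-gateway.php"      # the PHP relay

while True:
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # The Arduino sketch printed one temperature reading per line.
    requests.post(GATEWAY, data={"sensor": "temp", "value": line})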

[Photo: Arduino]

If this sounds ridiculously complicated and prone to failure, that's because it was. Like I said, I was used to hacking things together and getting them working quickly. This setup did work. However, it was almost completely limited to this one case. If I added another sensor, I would need to modify almost every single step in the pipeline. It also handled only one type of sensor, was completely susceptible to data corruption, and had a number of bottlenecks. It wasn't even close to what the project needed to be. But it was a start.

The first improvements I made were meant to simplify the pipeline. If I was going to have thousands of sensor locations, I shouldn’t need a laptop at every one of them to connect the sensors to the internet. There are a number of cheaper, faster, and more direct ways of getting online. In addition, the PHP gateway would have to go, as it acted as a severe handicap on the much faster Kinesis service. I would have to access Kinesis directly from the Python code.

Solving the first problem was simple: an Arduino, paired with an Ethernet shield, can connect to the internet. The second problem posed a serious issue: Arduinos don't run Python. An Ethernet-enabled Arduino could send requests to the PHP site all day, but writing a C library to talk directly to Kinesis was impractical, considering I had six weeks.

I went in search of another board. The Raspberry Pi is a similar price and size, but it has built-in Ethernet and runs Python. It was perfect.

The Pi had one flaw, however. Unlike the Arduino, which has hundreds of well-documented, easy-to-use sensors available, the Pi was harder to integrate with the physical world. I had been using the Grove System with the Arduino, a selection of sensors designed for plug-and-play functionality. One of the key goals of the project was to give other people the ability to add onto it after I left, and the Grove System was perfect for this. However, it only worked with the Arduino. Or so I thought.

The great thing about open source hardware is that when enough demand for a new feature exists, someone in the community builds it. Such was the case with the Grove System and the Raspberry Pi, which were linked by a project called GrovePi, a shield-like device that essentially acts as an Arduino. It reads the sensor values and then translates them into data that the Pi can understand.
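In practice, using it from Python looks roughly like this sketch. The port number is a placeholder and the exact calls depend on the sensor; the GrovePi library exposes Arduino-style reads that the shield performs and hands back to the Pi:

import grovepi

TEMP_PORT = 0                          # Grove analog port the sensor is plugged into

def read_raw_temperature():
    # The GrovePi's onboard microcontroller does the actual analogRead
    # and returns the 0-1023 value to the Pi.
    return grovepi.analogRead(TEMP_PORT)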

[Photo: Raspberry Pi]

I had a solution to the problems of reading data and getting it to Kinesis reliably. However, the system still lacked the efficiency and scalability it needed. I was still just sending streams of sensor readings that had no meaning to anyone who didn't know the exact setup. There was no way to tell what type of sensor data was being sent, or whether the data was intact and valid. To solve this, I initially put together a basic protocol that looked like this:

{1.24.2, “Temp”, 57}

This piece of data meant that the temperature sensor at 1.24.2 (more on the sensor ID system later) had a reading of 57 degrees. Once again, this system worked, but strictly within this context. It had a number of problems: defining sensor types with strings is unnecessary and error-prone, every update carries a lot of extra data, and if part of an update was corrupted, the code would have no idea and would send it to Kinesis anyway.

The solution to all of these problems came in the form of Protocol Buffers. Developed by Google, Protocol Buffers allow you to define data models in an external file. For example, I had a model for a sensor report, which had fields for type of sensor, reading, timestamp, and more. The file is used to encode the data, making an update only a few bytes long. After it is received, if it is intact, the data is reconstructed into the easily-accessible object.
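As a sketch of that round trip, assuming a message compiled from a .proto definition along those lines (the module and field names below are hypothetical; protoc generates the real ones from the .proto file):

import time
from google.protobuf.message import DecodeError
import sensor_report_pb2                     # hypothetical generated module

report = sensor_report_pb2.SensorReport()
report.sensor_id = "1.24.2"
report.sensor_type = 1                       # enum value defined in the .proto
report.reading = 57
report.timestamp = int(time.time())

wire_bytes = report.SerializeToString()      # only a few bytes on the wire

received = sensor_report_pb2.SensorReport()
try:
    received.ParseFromString(wire_bytes)     # raises DecodeError if the bytes are mangled
except DecodeError:
    pass                                     # drop the corrupted update instead of forwarding it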

While I was building this system, my initial feeling was that this was overkill. I thought those few bytes didn't really matter for what I was building and wouldn't make a difference. But David kept telling me to consider the potential scale. If I had tens of thousands of sensors, it would matter.

Now that I had a super-efficient sensor sending data to the cloud, I needed to think about expandability. The first thing I did was define a few concepts that the sensor network would be built around. There would be clusters, each one based on a Raspberry Pi. Each cluster would have sensors, which were individual data producers. Clusters were each part of a network. Defining the structure let me create a protocol for identifying each sensor that looked like this:  (Network ID).(Cluster ID).(Sensor ID)

Initially, I had configured the cluster to send a Kinesis record for every single sensor update. This meant if I wanted the state of a cluster once per second, and there were 10 sensors on the cluster, I would need 10 Kinesis requests per second, which would become impossible quickly, as our stream was limited to 1,000 write operations per second. In addition, the requests took time to send. The data size was not the bottleneck here; the actual connection was. The solution was combining sensor reports into single cluster reports, so that when a cluster wanted to send out an update, it would gather all of its data, package it up, and send it all along in one request to Kinesis. This approach saved a few tenths of a second, an amount of time I would have considered meaningless a few months ago. My early experiences with this project made me realize how critical that amount of time could be. When scaling to thousands of units, every microsecond counts.
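Here is a sketch of the batched approach, reusing the hypothetical names from earlier (and using JSON for readability, where the real report was packed with Protocol Buffers):

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
CLUSTER_ID = "1.24"                          # (Network ID).(Cluster ID)

def send_cluster_report(read_sensor, sensor_ids):
    report = {
        "cluster": CLUSTER_ID,
        "timestamp": int(time.time()),
        "readings": [{"sensor": f"{CLUSTER_ID}.{sid}", "value": read_sensor(sid)}
                     for sid in sensor_ids],
    }
    # One write per cluster per interval, instead of one per sensor.
    kinesis.put_record(
        StreamName="sensor-network",
        Data=json.dumps(report).encode("utf-8"),
        PartitionKey=CLUSTER_ID,
    )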

The Raspberry Pi was initially difficult to manage, because to control it, I needed to know the IP address, which I couldn’t get without hooking it up to a display. This was not practical for configuring a large number of Pis. The solution was putting a startup script on the Pi that emailed the IP address to me every time it started up. This simple fix made SSHing easy.
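The script itself is only a few lines; a sketch along these lines (the SMTP host and addresses are placeholders):

import socket
import smtplib
from email.message import EmailMessage

def local_ip():
    # "Connecting" a UDP socket to a public address reveals which local
    # interface, and therefore which IP, the Pi is using.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]
    s.close()
    return ip

msg = EmailMessage()
msg["Subject"] = f"Cluster online at {local_ip()}"
msg["From"] = "cluster@example.com"
msg["To"] = "me@example.com"
msg.set_content("Ready for SSH.")

with smtplib.SMTP("smtp.example.com") as smtp:
    smtp.send_message(msg)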

The startup script option also allowed me to get the data reading and Kinesis updating to start immediately, without any commands. This allowed for literal plug-and-play functionality; after simply plugging the cluster into the wall, records appear in the Kinesis stream.

With the prototype cluster done, I turned my focus to appearance. I had never worried much about making my projects look nice, but the cluster looked much more sinister than it should have, sitting in a corner of a conference room:

[Photo: the conference room sensor]

I set out to design an enclosure. I went to Staples and bought a large piece of foam board and a bottle of glue. I was thoroughly in arts-and-crafts territory. Using a cardboard Pi case as a template, I started cutting away until I had an enclosure that held the Pi and the sensors. It was less than attractive:

[Photo: the ugly Pi case]

But that is what prototyping is all about. After deciding on some changes to port locations and the like, I jumped into the absurdly simple online 3D app Tinkercad to design a 3D-printable case. Getting it printed was as simple as taking a train up to the MakerBot store in Greenwich Village and dropping off the file. A few hours later, I had a respectable, CG-themed enclosure:

[Photo: the CG-themed Pi case]

After a three week break, I was back at CG to see if this thing was really as scalable as I thought. While I was gone, parts for 9 (nine!) more clusters were delivered. It was awesome:

[Photo: nine sensors]

The nice thing about designing with scalability in mind was that when it actually came to scaling, everything worked. I was surprised, as I had been expecting hours of troubleshooting errors resulting from expanding the network by a factor of ten. Instead, there were no major problems. If I wasn’t already sold on scalable design, I was now.

This internship was a short six weeks. As with all projects, there are a ton of ways this could be taken further. I would have liked to set up a better way to communicate with the clusters. Kinesis, by design, only allows for one-way communication, so the service cannot talk back to the clusters. A web interface with the capability to change settings on each cluster would have been awesome. In addition, I didn't get to spend much time on the consuming side of the project. I had a simple program that displayed the real-time results, but if I had more time, I would definitely look at visualizing the data.

I’ve mentioned it before but I can’t overstate how foreign the concept of scalability was to me when I came to this job. I built things with the main intent of getting them working the quickest way possible. Scalability was a waste of time if the project was small. These six weeks have completely changed my engineering mentality. Designing for potential size, instead of current size, not only allows scaling to happen smoothly when the time comes, but also leads to more solid code overall, and is invaluable to projects of every size.

SXSW Panel Picker: Transmedia Storytelling in the Age of Proximity

SXSW Panel Picker is OPEN!

We are really excited about LTE Direct, a new mobile technology from Qualcomm that’s due to hit the market in about 15 months. So we thought it would be a great idea to partner up with Qualcomm, the very folks who created the chip, and Titan, a leader in out-of-home advertising, to explain what the opportunity will be for brands once LTE Direct is made public.

With mobile applications and proximity technologies like Bluetooth beacons and, soon, LTE Direct (LTED), synchronized campaigns will harmonize today's media noise and offer deeper engagement. Customers will begin an experience on their laptop, re-engage in a public space, respond to a call to action on their phones, and connect the dots with attribution in a retail store. This chip is going to change the mobile, retail, and out-of-home industries forever.

So if you think this sounds like an important topic for the SXSW community, VOTE for Transmedia Storytelling in the Age of Proximity! And tell your friends too! #LTEDsxsw

Future of Transit: Thoughts on Helsinki

Perhaps flying cars are not the future of transportation. The city of Helsinki recently announced plans to transform its existing public transportation network into a comprehensive multi-modal system that, in theory, would render cars unnecessary. Our in-house urbanists and transit experts, Jeff Maki and Neysa Pranger, provide their perspectives on this ambitious plan and what it could mean for NYC.

What’s your overall perspective on the Finnish plan?

JM: I think the Finnish plan accounts well for the direction technology is going: personal, mobile, ubiquitous, and on-demand. In fact, I was interested to read the Master's thesis that was the basis of the Finnish recommendation, because it applied the logic of the Internet and its development in the US, as well as trends in the American energy market, to the future of public transit.

It's an interesting perspective and theory of evolution for transit, especially coming from a country that is typically friendlier to state-owned infrastructure than to the privately run, "market-driven" approaches usually found in the US. The focus on "millennials" and their unique perspectives on public services was also great to see in the thesis.

NP: While seamless travel options through integrated wayfinding and payment are not new ideas, Northern Europeans are once again pioneering an innovative, city-scale transportation initiative, much as they did with congestion pricing, bike share, and pedestrian-friendly streets. The plan proposed by the Helsinki City Planning Department will be watched by many as citizen expectations for reliable service rise. At the same time, however, cities and states grapple with structural (a decline in gas tax revenue, for example) and political (enacting new taxes remains highly difficult) funding constraints that limit their ability to improve services.

But Helsinki is doing it at the right time, as those most likely to embrace the "shared economy" (Airbnb, Uber, BikeShare, TaskRabbit, and the like) move into a prime user demographic. Overall, what Helsinki is proposing is highly innovative, but it will also require intense collaboration between public and private providers, special attention to equitable provisioning, and continual pilot testing of how different user segments' needs are addressed.

What’s it going to take for people to give up their cars?

JM: To be honest, I think it’s just time. It’s already happening. There’s been a lot published recently about millennials and their declining rates of car ownership. You can take a bus from NYC to Boston for a few dollars now– it’s certainly not the price holding people back at this point. It’s about shifting expectations to a “shared” mindset.

You might summarize by extending some of the logic from the Finnish Master's thesis: if the road network (and the car) was a symbol of freedom to the "boomers," the Internet might be that same network to millennials. And it's our task as designers of personal mobility systems to figure out how to enable mobile devices and other Internet-connected things to provide that same sense of freedom afforded by the car. That's the thing that will cause people to switch, I think.

NP: I think it's useful to remember that in New York City, at least, owning a car is already difficult, and the City has multiple public and private systems, like ZipCar, buses, bike share, and subways. As a result, nearly half the residents in Manhattan do not own a car, and car ownership city-wide is on the decline.

But for users to move away from personal car ownership permanently, they’ll need to be presented with a time-competitive option for getting from point A to point B for a number of different purposes. Also key to this will be the frequency of service (how long will I have to wait?) and reliability (does it show up when it’s supposed to?).

Other requirements include:

– one or two seat rides: moving from one mode to the next can be cumbersome, especially for the elderly or parents with strollers;

– a cultural shift in perceived benefits of owning a car (going from ‘privilege’ to ‘curse’);

– support from mayors and governors, including strong messaging and the right package of policy incentives to back it up.

How will the Internet of Things play into this?

JM: This plan requires that shared services (buses, car rentals, taxis, subways, etc.) be connected to users. The Internet of Things is that connection, so I see its role as bringing the ability to engage with more physical systems to our phones via the Internet (or whatever form that might take in the future). And it's important to note that the interaction will go two ways: transit operators get data from users, and users get data from transit operators.

NP: I completely agree. Helsinki will find it difficult to get their system off the ground without real-time data availability and connected systems– both of which will be powered by the Internet of Things.

Is such a system feasible in a city like New York?

JM: Of course. We already have many of the pieces here: a ubiquitous network of taxis; an extensive transportation network in the form of commuter rail, subway, and bus; car share vendors and car rentals; informal bus options; and two world-gateway airports.

If there’s any barrier to realizing the Finnish plan here, I think it’s the lack of integration. Elsewhere, one organization operates many of these modes, but in NYC you have multiple organizations and little integration, making using these services more tedious– different fare cards, different mobile apps, etc. Getting the MTA, Port Authority and the City to form a working group charged with integrating transport in the New York region would be a huge step towards the Finnish plan, and a way to encourage people to use other options.

NP: While Helsinki's motives for developing Mobility as a Service are driven by trends in the marketplace, environment, and demographics, New York's would likely be different: relieving traffic that wreaks havoc on the economy, improving public health and the safety of pedestrians, and solving the ever-ominous need to fund better public transportation options. Over the last five years, New York has pointed to congestion pricing as a solution, but that has not proven feasible so far. But addressing New York's needs can be done many different ways, including increasing the supply of other time-competitive options, such as bike to ferry or bus to bike. So the development of a shared system like Helsinki's could be realized in New York and publicly supported.

The MTA, for example, spends fifteen cents of every fare dollar on collecting that fare. Sharing fare collection across ten different systems that together collect $10 billion in annual revenue would mean savings in the range of $1.5 billion. That's a strong argument for integration!

What would you do for NYC?

JM: As one concrete proposal, I would better integrate paratransit into NYC's mass transit system. It's the publicly operated system we have that is closest to the type described in the Finnish proposal. It also receives a lot of Federal funding, so the potential to innovate around it is huge. There were plans to replace paratransit with taxi vouchers a few years ago, but what if we added paratransit to the transportation network and redirected that money towards programs that serve both those with special mobility needs and the general public? There are challenges here, but nothing that can't be solved.

NP: We could pilot a shared system in Lower Manhattan, where there’s already limited parking, a residential and business population and access to several public and private systems including bike share, PATH, buses, subways, and ferries.