What does Apple Pay mean for retailers?

Yesterday Apple announced a contactless payment technology in the iPhone 6 and iPhone 6 Plus called Apple Pay, which will allow consumers to buy products in-store with just a tap of their iPhone. Apple Pay leverages a combination of iPhone hardware, including NFC, Touch ID, and the Secure Element (a dedicated chip that stores encrypted payment information), to process payments and protect the security and privacy of personal data. In fact, Apple won't know what shoppers are buying, and the retailer never sees the actual card number, because a one-time payment number and a dynamic security code are used to complete each transaction.

This comes at a time when Chip-and-PIN is becoming the standard for card security. Chip-and-PIN saw a fragmented roll-out and low adoption in the U.S. due to high hardware costs and the slow transition of credit and debit cards to chips, but it has been widely adopted in Europe for years. Now, though, both magnetic stripes and PINs are showing their vulnerabilities. Recently, a video went viral showing how easy it is to steal a PIN with cheap, easily accessible infrared cameras (that attach to iPhones, no less!).

In typical Apple fashion, Apple Pay puts consumer needs first, so anonymity of purchase data was touted as a major feature. With Apple's eye for refined interaction, the fingerprint scan and NFC tap are positively frictionless compared to typing in a PIN, the security mechanism used by Google Wallet and other digital wallet platforms. And with Apple leading the market into NFC acceptance, NFC reader penetration at the point of sale should climb quickly. Apple Pay may also accelerate the decline of retailer-specific apps by moving the "Uber model" of payment out of individual apps and into the Apple ecosystem. All of this may be the perfect salve for consumers left jittery by the Target and Home Depot data breaches.

But it also means that retailers will lose a tremendous amount of value and customer insight found in credit card purchase data. While the loss of transaction data may seem like a huge hit to retailers and their omni-channel efforts, Apple Pay does present some great opportunities for them:

  • Allows retailers to offload security concerns to the consumer (rather than holding a treasure trove of credit card numbers and names).
  • Provides opportunities for MUCH more flexible mobile POS options and pop-up shops.
  • The transaction fees Apple negotiated are well below what standard retailers have been able to get from financial institutions. This really moves the needle for retailers by reducing a major hard cost.
  • Provides motivation for the rest of the digital wallet marketplace to follow Apple’s lead. This should eventually lead to greater and faster consumer adoption.
  • Retailers can still use a combination of loyalty programs and proximity sensors to respond to their customers.

The Making of Future Makers


Control Group is a proud Founding Partner of a new high school, called the Urban Assembly Maker Academy, which opened last week in our Lower Manhattan neighborhood. UA Maker is a Career and Technical Education (CTE) school with a mission to empower students to be successful, adaptive citizens through challenge-based learning and the principles of design thinking.

UA Maker’s curriculum prepares students for both college and careers by teaching them how to use design thinking and technology to solve problems. The school features a new kind of classroom experience that models aspects of the modern agile workplace so that students can develop the skills, tools, and habits of inquiry to be tomorrow’s “makers.”

Control Group got involved with UA Maker Academy because we believe that the world's challenges require problem solvers equipped with both critical and creative thinking skills. They will need to be curious about the world around them and to empathize with others in order to develop the best solutions for people, communities, and businesses. Beyond a textbook education, the next generation of strategists, engineers, and designers deserves exposure to, and experience with, tackling real-world problems.

In our business, we use the principles of design thinking to create successful products and experiences for our clients. A human-focused mindset gives us a clear path and method for collaborating with stakeholders to create the most impactful solutions. Working with the Urban Assembly and an energized group of industry and higher-ed representatives, an ambitious and talented group of educators is giving students the opportunity to approach their world with empathy, confidence, and action as the backbone of their high school experience. This is just what we need to build the future.

 

Data Freedom: Part 2 of 3

Sure, it’s not exactly like Scottish independence, but I feel like William Wallace might still give us the nod for our own effort at (data) freedom.

A few weeks ago we started looking at data freedom because, while there are many advantages to using SaaS vendors, there are some issues to keep an eye on. One of those issues is finding ways to access and use the data that’s been sent out into the vendor’s system. The first installment of this series was about a small problem with a fast solution. We didn’t have to worry about real-time or frequently-changing data.

But for Vendor 2, things weren’t so easy. Like well-known #2s Art Garfunkel and Ed McMahon, Vendor 2 is easy to overlook but nonetheless necessary on a day-to-day basis. Vendor 2 is one of those internal tracking vendors we use every day with data that changes quickly and often.

Vendor 2 got the job done for us, but sadly, their reporting left something to be desired. Sure, they had reports, but there was no way to link to external data. And don’t get me started on getting it to do any complicated slicing-and-dicing. We ended up with a lot of people who needed to pull down spreadsheets and re-do the same calculations month after month. We heard the cries from people-who-will-remain-nameless (but who are me):  “I can write the darn SQL if you just let me!”

So, how did we set up a system that uses SaaS vendor data but reports the way we want it to? We set up a system to copy their information to a database we control… and then we wrote the darn SQL.

Easier said than done, for sure. For this case, we called in bigger guns and took a look at Talend, a full Enterprise Service Bus (ESB) solution that enabled us to create our own data store. The goal was to create a data store on our own terms that can auto-update as information changes on the vendor’s side in near-real time via Vendor 2’s full-featured API. Now we can do what we need with the data: write the SQL for static reports or hook up a BI tool to view it. Whatever we need.
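
To make the pattern concrete, here's a minimal Python sketch of the kind of sync we're describing. The endpoint, table layout, and field names are hypothetical stand-ins, not Vendor 2's actual API, and our production version runs as a Talend route rather than a script:

# A minimal sketch of the copy-to-our-own-database pattern; all names are placeholders.
import requests
import psycopg2

API_URL = "https://vendor2.example.com/api/projects"   # hypothetical endpoint
API_TOKEN = "..."                                       # stored securely in practice

def sync_projects():
    rows = requests.get(API_URL,
                        headers={"Authorization": "Bearer " + API_TOKEN},
                        timeout=30).json()
    conn = psycopg2.connect("dbname=vendor2_mirror")    # the data store we control
    with conn, conn.cursor() as cur:
        for row in rows:
            # Upsert so repeated runs pick up changes made on the vendor's side.
            cur.execute(
                """
                INSERT INTO projects (id, name, status, updated_at)
                VALUES (%(id)s, %(name)s, %(status)s, %(updated_at)s)
                ON CONFLICT (id) DO UPDATE
                SET name = EXCLUDED.name,
                    status = EXCLUDED.status,
                    updated_at = EXCLUDED.updated_at
                """,
                row,
            )
    conn.close()

if __name__ == "__main__":
    sync_projects()   # run on a schedule for a near-real-time copy we can query with SQL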

Just that easy? Well…

TalendScreen

That “easy”

In this case, we used the Community Edition of the ESB to see what it could do. One thing we found right away was that Talend organizes things in two ways: "jobs" and "routes." The routes side is built on something Enterprise Architecture veterans will know well: Apache Camel. Working with an agreed-upon standard has its own advantages, but we also found routes to be more robust than jobs. For instance, they could handle exceptions, such as the API responding slowly, and cases where we needed to "page through" long sets of data. With that, we were off and running, with a few hurdles still to clear.

Nice Flow Diagrams Do Not Mean Non-Technical: Starting with a "route," we went object by object to create a parallel data model on our side, so we could write the SQL, and mapped each object to a specific API call. To the uninitiated, the not-so-user-friendly Camel calls look like this:

.setHeader("RowCountInPage").groovy("headers.RowCountInPage = headers.RowCountInPage.isInteger() ? headers.RowCountInPage.toInteger() : 0")

Not exactly drag-and-drop syntax. That's actually a fairly simple one, but even so it mixes the Camel DSL with Groovy scripting, and it can only be viewed or edited via a "property" of one of those flow icons, not in a text file. The GUI aspect falls away fairly quickly.

In short, this is a case that called for real development. It’s not rocket science but also not to be taken lightly. Don’t let the nice flow diagram fool you.

An API Is A Unique Blossom (sometimes): On the Vendor 2 side of things, they do have an API, but there were no quick answers here. You can do an awful lot with a full-featured API, but it can take a while to learn how, since each API is a little different. In this case, each call required crafting a specific XML structure, with its own way of fetching large data sets page by page and some occasionally opaque terminology. There was no easy "getProjects()" type of call to fall back on. We were able to work our way through Vendor 2's documentation, but it also made us appreciate the solution we designed for Vendor 1, which let us avoid that level of mucking about in somebody else's data model.
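
For a flavor of the paging dance, here's a rough Python illustration with a made-up XML envelope; Vendor 2's real request format, field names, and paging parameters are different:

# A rough illustration of paging through an XML API; the envelope and URL are invented.
import requests
import xml.etree.ElementTree as ET

API_URL = "https://vendor2.example.com/api/query"   # hypothetical endpoint

def fetch_all(object_type, page_size=500):
    page, results = 1, []
    while True:
        body = ("<request>"
                "<object>" + object_type + "</object>"
                "<pageSize>" + str(page_size) + "</pageSize>"
                "<pageNumber>" + str(page) + "</pageNumber>"
                "</request>")
        resp = requests.post(API_URL, data=body,
                             headers={"Content-Type": "application/xml"}, timeout=60)
        resp.raise_for_status()
        rows = ET.fromstring(resp.content).findall(".//row")
        results.extend(rows)
        if len(rows) < page_size:    # a short page means we've reached the end
            break
        page += 1
    return results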

And Here You Thought Things Like Version Control Were Straightforward: Just when you had git mastered and figured it would be easy to work in a team again, along comes a centralized system like this. As it turns out, a Talend workflow isn't just a few nice, editable XML files. Instead, it creates sets of files and auto-updates portions of them in unpredictable ways. For instance, the updated date is part of a project file, so every save "changes" the project. Be sure to close Talend before committing your files, since they change while the Studio product is running!

Talend, the company, wants you to upgrade to the paid edition for its internal version control, but that would also mean a separate repository dedicated to their tool. In the end, we got it to work in our version control and lived to tell the tale. Unfortunately, there were bumps in the road in places we thought might roll like the autobahn.

In general, Talend worked for us, but using the Community Edition wasn't always straightforward. Going with the "routes" side of Talend steered us away from Talend's main offering in favor of the more standard Camel implementation. Using routes meant we could lean on the extensive Apache Camel documentation, but it cut us off from much of Talend's own forums and documentation, which focus on the "jobs" side. Alas, there wasn't an easy middle ground that captured the best of both.

In the end, Vendor 2 was a lot more work to integrate than Vendor 1. That's no surprise. But now that it's up and running, the volume of information we're capturing and updating is huge, and we can write those reports however we want: business analytics packages, home-written darn-SQL statements, and so on. The Excel re-work is no longer necessary. And we did all of this without touching the main functions of Vendor 2.

We took on a lot more configuration work, but we now find ourselves with a full copy of our data, able to do what we want with it rather than only what the vendor allows. This level of integration also makes us a little less dependent on Vendor 2. Should we need to swap them out someday, we'll start with all our historical data at the ready.

After all, even Simon and Garfunkel eventually broke up.

Summer Internship 2014: Visualizing Workplace Data

This is a post by Samuel Lalrinhlua, a student at Syracuse University in the Master of Science in Information Management (2015) program. He was also a summer intern on our Enterprise Architecture team. 

I first came across Control Group when I read the 'Best Places to Work 2012' list published by Crain's. I was immediately drawn in by the photo of their Star Trek-esque hallways and thought to myself, "That would be a cool place to work." But I never thought in a million years that I would actually get the opportunity to work for this company and write about my internship experience on their blog.

When I arrived in June I was given a detailed description of the projects that I would be working on this summer: add visualizations of CG data on the monitors that hang above the Support Center and find other interesting ways to show data around the office. My fellow intern, Soohyun Park, and I were asked to collaborate and create visualizations that used and displayed dynamic data.

Conf. Room Availability

I worked with several applications, such as Talend and PostgreSQL, to extract relevant internal data such as Personal Time Off (PTO) status, work anniversaries, timesheet usage, and project status, among other things. All of this data was used to create the visualizations now shown on the big screens in the office. Many of these technologies were new to me, and it took some troubleshooting along the way to see results. Soohyun and I also developed an iPad visualization that displays the status of the conference rooms: red shows "booked" and green shows "available." App development was new to me, and I learned a lot from this experience.
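
As a simplified illustration of the red/green logic, here's roughly how a room's status could be computed in Python, assuming the bookings have already been pulled into a PostgreSQL table (the schema here is hypothetical, not our actual one):

# A simplified sketch of the red/green room-status check; schema and names are placeholders.
import psycopg2

def room_status(room_name):
    conn = psycopg2.connect("dbname=office_data")  # hypothetical mirror database
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT COUNT(*) FROM room_bookings
            WHERE room = %s AND start_time <= now() AND end_time > now()
            """,
            (room_name,),
        )
        booked = cur.fetchone()[0] > 0
    conn.close()
    return "red" if booked else "green"  # red = booked, green = available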

I am glad that I got to spend my summer with CG. I have gained invaluable experience, both professionally and personally. Thank you all for your support, and for the coffee (I'm going to miss that!). And thank you for making me a part of the Control Group team this summer.

“Live long and prosper.”

Summer Internship 2014: Designing for Scalability

This is a post by Alex Daley, a student at Elisabeth Irwin High School (Class of 2015) and a summer intern on our DevOps team.

I have been deeply interested in many kinds of engineering for most of my high school career. On the hardware side, I have launched Arduinos into the stratosphere, led our robotics team to victory, and run seminars for teachers interested in 3D printing. On the software side, I have built apps and designed websites. The common theme is that these projects were pretty "hacked together": I quickly built things that worked, but they probably weren't scalable and were rarely reusable.

I came to CG this summer to work with cloud services and sensor networks. I was tasked with the design and implementation of a highly-scalable, real-time sensor network. I had worked with sensors in the past, but building something on a large scale that had to be solid enough to expand was an interesting challenge. The goal was to have a number of sensors report data to a central location.

A few years ago, that central location would probably have been a SQL database. Before starting at CG, that is definitely how I would have implemented it. Instead, David introduced me to a service from Amazon called Kinesis. Kinesis is a "data stream" that allows very large amounts of data to be collected and retrieved with minimal latency. Not only was it immensely scalable, it was also ridiculously simple. I had no experience with Amazon Web Services when I came to CG, but I was able to get Kinesis working in a few hours. Just like that, the entire backend for the sensor network was taken care of.
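
For a sense of how little code that takes, here's a minimal Python sketch of writing a reading to Kinesis using boto3, the current AWS SDK for Python; the stream name and payload format are placeholders, not the ones from my project:

# A minimal sketch of putting one sensor reading onto a Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_reading(sensor_id, value):
    kinesis.put_record(
        StreamName="sensor-network",   # placeholder stream name
        Data=json.dumps({"sensor": sensor_id, "value": value}).encode(),
        PartitionKey=sensor_id,        # keeps each sensor's readings ordered within a shard
    )

send_reading("1.24.2", 57)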

For my first shot at the actual network, I started somewhere familiar: the Arduino. I was convinced that the small, inexpensive board was perfect for every application, since I had used it for everything from automatic fish feeders to weather balloons. I hooked up a temperature sensor, plugged it into a computer, and used a Python script to parse the serial data and send it to a PHP site, which in turn sent the readings to Kinesis.
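
That first pipeline looked roughly like the Python sketch below: read lines from the Arduino over USB serial and relay them to a PHP page. The device path, line format, and URL are made-up examples:

# Roughly the shape of the first, hacked-together relay; names and paths are invented.
import requests
import serial  # pyserial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=5)  # hypothetical USB serial port

while True:
    line = arduino.readline().decode(errors="ignore").strip()  # e.g. "temp:57"
    if line:
        requests.post("https://example.com/gateway.php", data={"reading": line})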

Arduino

If this sounds ridiculously complicated and prone to failure, that's because it was. Like I said, I was used to hacking things together and getting them working quickly. This setup did work. However, it was almost completely limited to this one case. If I added another sensor, I would need to modify almost every step in the pipeline. It also handled only one type of sensor, was completely susceptible to data corruption, and had a number of bottlenecks. It wasn't even close to what the project needed to be. But it was a start.

The first improvements I made were meant to simplify the pipeline. If I was going to have thousands of sensor locations, I shouldn’t need a laptop at every one of them to connect the sensors to the internet. There are a number of cheaper, faster, and more direct ways of getting online. In addition, the PHP gateway would have to go, as it acted as a severe handicap on the much faster Kinesis service. I would have to access Kinesis directly from the Python code.

Solving the first problem was simple. An Arduino, paired with an Ethernet shield, can connect to the internet. The second problem posed a serious issue: Arduinos don't run Python. An Ethernet-enabled Arduino could send requests to the PHP site all day, but writing a C library to talk directly to Kinesis was impractical, considering I had six weeks.

I went in search of another board. The Raspberry Pi is a similar price and size, but it has built-in Ethernet and runs Python. It was perfect.

The Pi had one flaw, however. Unlike the Arduino, which has hundreds of well-documented, easy-to-use sensors available, the Pi was harder to integrate with the physical world. With the Arduino I had been using the Grove System, a selection of sensors designed for plug-and-play functionality. One of the key goals of the project was to give other people the ability to build on it after I left, and the Grove System was perfect for this. However, it only worked with the Arduino. Or so I thought.

The great thing about open-source hardware is that when enough demand for a new feature exists, someone in the community builds it. Such was the case with the Grove System and the Raspberry Pi, which were linked by a project called GrovePi, a shield-like board that essentially acts as an Arduino: it reads the sensor values and translates them into data the Pi can understand.
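
Reading a Grove sensor from the Pi then becomes a few lines of Python. This is a bare-bones sketch using the GrovePi library's analogRead call; the port number, and how the raw 0-1023 value maps to a real-world unit, depend on the specific sensor:

# A bare-bones read loop; port and unit conversion are sensor-specific assumptions.
import time
import grovepi  # GrovePi Python library

SENSOR_PORT = 0  # hypothetical analog port on the GrovePi

while True:
    raw = grovepi.analogRead(SENSOR_PORT)  # raw analog value from the Grove sensor
    print(raw)
    time.sleep(1)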

Raspberry Pi

I now had a solution to the problems of reading data and getting it to Kinesis reliably. However, the system still lacked the efficiency and scalability it needed. I was still just sending streams of sensor readings that had no meaning to anyone who didn't know the exact setup. There was no way to tell what type of sensor data was being sent, or whether the data was intact and valid. To solve this, I initially put together a basic protocol that looked like this:

{1.24.2, “Temp”, 57}

This piece of data meant that the temperature sensor at 1.24.2 (more on the sensor ID system later) had a reading of 57 degrees. Once again, this system worked, but strictly within this context. It had a number of problems: defining sensor types with strings is unnecessary and error-prone, there's a lot of extra data in each update, and if part of an update was corrupted, the code would have no idea and would send it to Kinesis anyway.

The solution to all of these problems came in the form of Protocol Buffers. Developed by Google, Protocol Buffers let you define data models in an external file. For example, I had a model for a sensor report with fields for the type of sensor, the reading, the timestamp, and more. That file is used to encode the data, making an update only a few bytes long. After it is received, if it is intact, the data is reconstructed into an easily accessible object.
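
Here's an illustrative version of that idea: a small message definition (shown as a comment) and the Python round trip using the class protoc generates. The field names and module name are stand-ins, not the actual schema from my project:

# The assumed message definition (sensor.proto), shown here as a comment:
#
#   message SensorReport {
#     required string sensor_id = 1;   // e.g. "1.24.2"
#     required string type      = 2;   // sensor type (an enum would be tighter)
#     required double reading   = 3;
#     required int64  timestamp = 4;
#   }
#
# After `protoc --python_out=. sensor.proto` generates sensor_pb2.py:
import time
import sensor_pb2  # hypothetical generated module

report = sensor_pb2.SensorReport()
report.sensor_id = "1.24.2"
report.type = "Temp"
report.reading = 57
report.timestamp = int(time.time())

payload = report.SerializeToString()  # compact binary encoding, only a few bytes long

decoded = sensor_pb2.SensorReport()
decoded.ParseFromString(payload)      # raises DecodeError if the payload is malformed
print(decoded.reading)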

While I was building this system, my initial feeling was that this was overkill. I thought those few bytes didn't really matter for what I was building and wouldn't make a difference. But David kept telling me to consider the potential scale: with tens of thousands of sensors, it would matter.

Now that I had a super-efficient sensor sending data to the cloud, I needed to think about expandability. The first thing I did was define a few concepts that the sensor network would be built around. There would be clusters, each one based on a Raspberry Pi. Each cluster would have sensors, the individual data producers, and each cluster would belong to a network. Defining the structure let me create a scheme for identifying each sensor that looked like this: (Network ID).(Cluster ID).(Sensor ID)
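
A tiny, purely illustrative Python helper shows how little is needed to work with that addressing scheme:

# Parse a "network.cluster.sensor" address into its parts (illustrative only).
from collections import namedtuple

SensorAddress = namedtuple("SensorAddress", ["network", "cluster", "sensor"])

def parse_sensor_id(sensor_id):
    network, cluster, sensor = sensor_id.split(".")
    return SensorAddress(int(network), int(cluster), int(sensor))

print(parse_sensor_id("1.24.2").cluster)  # 24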

Initially, I had configured the cluster to send a Kinesis record for every single sensor update. That meant that if I wanted the state of a cluster once per second, and there were 10 sensors on the cluster, I would need 10 Kinesis requests per second, which would quickly become unsustainable, as our stream was limited to 1,000 write operations per second. In addition, the requests took time to send; the data size was not the bottleneck here, the actual connection was. The solution was combining sensor reports into single cluster reports, so that when a cluster wanted to send an update, it would gather all of its data, package it up, and send it along in one request to Kinesis. This approach saved a few tenths of a second per update, an amount of time I would have considered meaningless a few months ago. My early experiences with this project made me realize how critical that amount of time could be. When scaling to thousands of units, every microsecond counts.
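
In miniature, the batching looks something like this: one Kinesis record per cluster per interval instead of one per sensor. I've used JSON here for readability (the real project packed the readings into a Protocol Buffer), and the stream name is again a placeholder:

# Combine all of a cluster's readings into a single Kinesis record per interval.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_cluster_report(cluster_id, readings):
    # readings is a dict like {"1.24.1": 72.5, "1.24.2": 57.0}, gathered from every
    # sensor on the cluster for this interval.
    report = {"cluster": cluster_id, "timestamp": int(time.time()), "readings": readings}
    kinesis.put_record(
        StreamName="sensor-network",   # placeholder stream name
        Data=json.dumps(report).encode(),
        PartitionKey=cluster_id,       # one partition key per cluster
    )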

The Raspberry Pi was initially difficult to manage, because to control it, I needed to know the IP address, which I couldn’t get without hooking it up to a display. This was not practical for configuring a large number of Pis. The solution was putting a startup script on the Pi that emailed the IP address to me every time it started up. This simple fix made SSHing easy.
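
One way to implement that trick looks like the sketch below; the SMTP server and addresses are placeholders, and on a real Pi this would be launched from something like rc.local or a systemd unit at boot:

# Email the Pi's current IP address on startup so it can be found and SSHed into.
import smtplib
import socket
from email.message import EmailMessage

def current_ip():
    # "Connecting" a UDP socket to a public address reveals which local IP would be used.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]
    s.close()
    return ip

ip = current_ip()
msg = EmailMessage()
msg["Subject"] = "Cluster online at " + ip
msg["From"] = "cluster@example.com"   # placeholder addresses
msg["To"] = "me@example.com"
msg.set_content("SSH to pi@" + ip)

with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder SMTP relay
    smtp.send_message(msg)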

The startup script option also allowed me to get the data reading and Kinesis updating to start immediately, without any commands. This allowed for literal plug-and-play functionality; after simply plugging the cluster into the wall, records appear in the Kinesis stream.

With the prototype cluster done, I turned my focus to appearance. I had never worried much about making my projects look nice, but the cluster looked much more sinister than it should have, sitting in a corner of a conference room:

Conference Room Sensor

I set out to design an enclosure. I went to Staples and bought a large piece of foam board and a bottle of glue; I was thoroughly in arts-and-crafts territory. Using a cardboard Pi case as a template, I started cutting away until I had an enclosure that held the Pi and the sensors. It was less than attractive:

Ugly Pi Case

But that is what prototyping is all about. After deciding on some changes to port locations and the like, I jumped into Tinkercad, an absurdly simple online 3D modeling app, to design a 3D-printable case. Getting it printed was as simple as taking a train up to the MakerBot store in Greenwich Village and dropping off the file. A few hours later, I had a respectable, CG-themed enclosure:

CG Pi Case

After a three week break, I was back at CG to see if this thing was really as scalable as I thought. While I was gone, parts for 9 (nine!) more clusters were delivered. It was awesome:

nine sensors

The nice thing about designing with scalability in mind was that when it actually came to scaling, everything worked. I was surprised, as I had been expecting hours of troubleshooting errors resulting from expanding the network by a factor of ten. Instead, there were no major problems. If I wasn’t already sold on scalable design, I was now.

This internship was a short six weeks. As with all projects, there are a ton of ways this could be taken further. I would have liked to set up a better way to communicate with the clusters: Kinesis, by design, only allows one-way communication, so the service cannot talk back to the clusters. A web interface with the ability to change settings on each cluster would have been awesome. In addition, I didn't get to spend much time on the consuming side of the project. I had a simple program that displayed the real-time results, but with more time I would definitely look at visualizing the data.

I’ve mentioned it before but I can’t overstate how foreign the concept of scalability was to me when I came to this job. I built things with the main intent of getting them working the quickest way possible. Scalability was a waste of time if the project was small. These six weeks have completely changed my engineering mentality. Designing for potential size, instead of current size, not only allows scaling to happen smoothly when the time comes, but also leads to more solid code overall, and is invaluable to projects of every size.

SXSW Panel Picker: Transmedia Storytelling in the Age of Proximity

SXSW Panel Picker is OPEN!

We are really excited about LTE Direct, a new mobile technology from Qualcomm that’s due to hit the market in about 15 months. So we thought it would be a great idea to partner up with Qualcomm, the very folks who created the chip, and Titan, a leader in out-of-home advertising, to explain what the opportunity will be for brands once LTE Direct is made public.

With mobile applications and proximity technologies like Bluetooth beacons and, soon, LTE Direct (LTED), synchronized campaigns will cut through media noise and offer deeper engagement. Customers will begin an experience on their laptop, re-engage in a public space, respond to a call to action on their phones, and connect the dots with attribution in a retail store. This chip is going to change the mobile, retail, and out-of-home industries forever.

So if you think this sounds like an important topic for the SXSW community, VOTE for Transmedia Storytelling in the Age of Proximity! And tell your friends too! #LTEDsxsw