The Internet of Things has been around, at least conceptually, for some time now. In practice, the last year or two have seen the first real surge of affordable devices hitting the market, everything from smart thermostats to hardware-based virtual assistants like the Amazon Echo.
Surprisingly, the Federal Trade Commission put out guidance on security for IoT as far back as January 2015. While not exactly a technical deep dive, it does have some sound advice: encrypt communications, salt hashed data, and design devices with authentication in mind. Basically, take what we’ve learned over the past decades of desktop and mobile security and apply it to IoT.
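To make one of those recommendations concrete, here is a minimal sketch of salting and hashing a stored secret using only Python's standard library. The function names and iteration count are illustrative choices, not anything from the FTC guidance itself:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune upward as hardware allows

def hash_secret(secret: str) -> tuple[bytes, bytes]:
    """Derive a salted hash of a secret with PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique random salt per record defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return salt, digest

def verify_secret(secret: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The key points are the random per-record salt and the constant-time comparison, both of which were well-understood practice on desktop and server platforms long before IoT devices shipped without them.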
It’s too bad D-Link didn’t get that memo. The FTC has filed a lawsuit against them, alleging that the company’s security claims were deceptive to consumers. The complaint cites many legitimate security flaws, but the most glaring and inexplicable is hard-coded login credentials on devices. I genuinely hope no one thought this was even remotely secure, but rather that it was an oversight held over from an ancient, pre-Internet security policy. That doesn’t make it excusable, but at least I can understand it.
The FTC isn’t just wheeling out the sticks to get better IoT security; they’re also offering some carrots. They recently launched the IoT Home Inspector Challenge, seeking the best tool to protect consumers from IoT vulnerabilities. The top prize is $25,000.
What I think this problem comes down to is proper visualization. Many consumers set up devices in their homes with no real idea of what happens to the data collected. Most people just want to do the initial configuration, and as long as they see it working, there’s no issue. The challenge becomes how to simply show consumers what devices are on their network, and how those devices are accessing the wider Internet.
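The discovery half of that challenge already has raw material: on a Linux-based home router, the kernel's ARP cache (`/proc/net/arp`) lists every device the router has recently seen. A rough sketch of parsing that format follows; the sample text and MAC addresses are invented for illustration, and a real tool would read the file directly and map MAC prefixes to vendor names:

```python
# Illustrative sample in the /proc/net/arp format (addresses invented).
SAMPLE_ARP = """IP address       HW type     Flags       HW address            Mask     Device
192.168.1.10     0x1         0x2         a4:77:33:01:02:03     *        eth0
192.168.1.23     0x1         0x2         44:65:0d:aa:bb:cc     *        eth0
"""

def parse_arp(text: str) -> list[dict]:
    """Extract IP and MAC address pairs from ARP-cache text."""
    devices = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            devices.append({"ip": fields[0], "mac": fields[3]})
    return devices

for dev in parse_arp(SAMPLE_ARP):
    print(dev["ip"], dev["mac"])
```

Turning a table like this into something a consumer understands, with friendly device names instead of MAC addresses, is exactly the visualization gap the rest of this piece is about.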
Luckily, the enterprise has gotten very good at visualization. When I attended Networking Field Day in November, just about every presenting company showed off how robust and capable their visualization tools were. All of them had great browser-based dashboards and provided graphical configuration options. These same types of tools are needed for consumers.
Now there are several issues with this. For one, most consumers really don’t want to fool around with extensive configuration. A network engineer is motivated to do so because it’s their job. A consumer often prioritizes simplicity, considering security a secondary issue.
The other major issue is dealing with multiple platforms. If even an organization has problems working across multiple vendors, the number of potential configurations increases exponentially on the consumer side.
Then there is the problem of who will provide this solution. In the enterprise, these abstraction management layers are brought in by third parties. But which companies would bring this to the consumer level? They would need the resources to account for a huge variety of hardware. Google and Amazon are out because they are competitors in the space. Apple would probably only want to work with its own products, or within its HomeKit ecosystem. I originally thought this would be a perfect extension for PC security companies, but their user experience reputation with consumers is so bad that it would probably suffer from poor adoption. I actually think this would be a perfect opportunity for someone like Microsoft, which is clearly interested in working cross-platform these days, has good brand recognition, and is used to supporting a morass of disparate hardware.
But what would this actually look like? Any of the great visualization tools I saw from SolarWinds, Forward Networks, or Apstra would be a good starting point in principle. Essentially, this kind of visualization would need to show all devices on the network, what type of device each one is, what information is going out onto the Internet, and how that information is secured. The last part is relatively simple: the solution could use a classic red-yellow-green scheme for quick comprehension. Automating all of this is where it becomes an issue. We saw that a company like Forward Networks was able to model hardware configurations from a wide variety of vendors for its software network model. I saw from SolarWinds that a hardware-based poller can gather extraordinarily detailed network information (admittedly on much more sophisticated networks). And what I saw from Apstra showed that intent can be factored into how these networks are designed and function.
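The red-yellow-green idea can be sketched in a few lines. The device fields and the rules mapping them to a status are purely my own illustrative assumptions about what such a tool might check, not anything from the vendors mentioned above:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    traffic_encrypted: bool    # is outbound traffic encrypted?
    default_credentials: bool  # still using factory credentials?
    firmware_current: bool     # running the latest firmware?

def status(device: Device) -> str:
    """Map hypothetical per-device findings to a dashboard color."""
    if device.default_credentials or not device.traffic_encrypted:
        return "red"      # serious, actionable problem
    if not device.firmware_current:
        return "yellow"   # worth attention, not urgent
    return "green"        # nothing flagged

devices = [
    Device("thermostat", True, False, True),
    Device("smart speaker", True, False, False),
    Device("camera", False, True, False),
]
for d in devices:
    print(d.name, status(d))
```

The hard part, as noted above, isn't this mapping; it's automatically and reliably populating those fields for thousands of device models.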
Oddly, I think the relative simplicity of consumer networks will be the biggest tripping point. It’s not like an enterprise setting, where you’re often worried about traffic hitting specific servers for compliance or performance reasons. Most of these devices connect directly to an Internet-connected router. And fundamentally, if manufacturers bake obvious security flaws into their products, there’s not much a consumer can do. But enterprise-grade visualization could at least give consumers some context and let them make more informed decisions. As IoT increasingly enters more personal and sensitive spaces in our lives, companies can’t expect mass consumer adoption without some kind of network visibility. Of course, it’s one thing to create a compelling solution; it’s another to monetize it.