Ensuring high reliability, high quality and resilience in Internet of Things (IoT) deployments keeps your projects from failing at scale.
With an IoT project, making the leap from proof of concept to a production environment at scale can be tricky and time-consuming. According to Beecham Research in the report “Why IoT Projects Fail,” 60% of respondents trying to build an IoT solution had problems with scalability.
In MachNation’s Performance IoT podcast, I talked with Josh Taubenheim about how you can ensure reliability in your IoT solutions by ruthlessly adopting best practices.
Here’s a sample of our discussion.
Taubenheim: When we here at MachNation think about the steps involved in executing an IoT implementation and the iteration involved in completing phased rollouts of new functionality, we generally picture it as a design, develop and test workflow. Is that in alignment with the processes Software AG implements to ensure high performance and reliability in its customer deployments?
Me: At Software AG, we follow an evolution of this workflow with continuous design, development, integration, and deployment, with testing built into each stage. This allows us to ensure not just the product’s correct functional operation, but also non-functional elements such as performance, robustness, and scalability.
Taubenheim: What are some things you would describe as best-in-class tactics throughout these processes? What do you have to get right to ensure you’re providing the best customer implementation?
Me: In one way, it’s just getting the basics right: working as one team with the customer, communicating clearly, proactively solving problems, and delivering to jointly agreed milestones. However, the larger and more complex the project becomes, the more difficult it is to achieve this without a formal structure. Strong PMI-based project management is ideal here.
Taubenheim: Do you have any sort of internal baseline benchmarks that all customer implementations need to meet to consider a deployment highly resilient, or is there some nuance to every situation that needs to be considered (i.e., are there static numbers used as required performance benchmarks)?
Me: We have targets for both the operational performance of the base product and the overall solution of which it is a component.
- For the base product, we conduct tests of the functional operation of the software modules in their deployed configuration; operational tests to ensure monitoring and alerting; backup and recovery tests in the event of a failure condition; and an assessment against performance targets.
- For the overall solution, the assessment of the deployment’s resiliency is both unique to the customer deployment and typically expensive to conduct, as it requires an identical deployment to be created and heavily loaded with high numbers of devices and transactions.
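To make the load-testing idea above concrete, here is a toy sketch in Python of the kind of check such a test performs: simulated devices push messages concurrently into a shared queue (standing in for an ingestion endpoint), and measured throughput is compared against a performance target. The device counts, message rates, and the `TARGET_MSGS_PER_SEC` figure are hypothetical illustrations, not Software AG's actual test harness or benchmarks.

```python
import queue
import threading
import time

def run_load_test(num_devices: int, messages_per_device: int) -> float:
    """Toy load test: each 'device' is a thread pushing messages onto a
    shared queue that stands in for an ingestion endpoint.
    Returns measured throughput in messages per second."""
    ingest = queue.Queue()

    def device(device_id: int) -> None:
        for seq in range(messages_per_device):
            ingest.put({"device": device_id, "seq": seq})

    threads = [threading.Thread(target=device, args=(i,))
               for i in range(num_devices)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    # Resilience check: no messages may be lost under load.
    total = ingest.qsize()
    assert total == num_devices * messages_per_device
    return total / elapsed

# Compare measured throughput against a hypothetical performance target.
TARGET_MSGS_PER_SEC = 10_000
throughput = run_load_test(num_devices=50, messages_per_device=200)
verdict = "PASS" if throughput >= TARGET_MSGS_PER_SEC else "FAIL"
print(f"throughput: {throughput:,.0f} msg/s -> {verdict} "
      f"against target {TARGET_MSGS_PER_SEC}")
```

A real assessment would run this against a full replica of the production deployment, which is what makes it expensive: the target numbers only mean something when measured on identical infrastructure.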
To listen to the entire podcast, please click below.