The ongoing low-code/no-code movement opens up new possibilities for knowledge workers of all types. Innovative commercial platforms are enabling faster application development, spawning a modern generation of citizen developers. These tools often incorporate visual programming and leverage pre-built components to create reusable workflows. And, when it comes to enterprise AI adoption, adopting a low-code/no-code platform could also encourage a new breed of citizen data scientists.
There are many potential benefits to providing analysts with increased access to aggregated data and the ability to quickly spin up machine learning (ML) models. With such power, they could run experiments related to their core expertise and use the results to make more informed business decisions. It would also help unlock the true wealth of enterprise data, which today sits in the hands of a technically privileged few.
However, organizations could quickly encounter pitfalls when introducing low-code/no-code into AI. Doing so correctly requires collaboration between citizen developers and heavy coders. It also requires a solid statistical base to perform analysis, as well as platform governance to ensure data privacy is upheld.
I recently met with Ed Abbo, president and CTO of C3 AI, to discover the impacts of low-code/no-code adoption within data analysis and AI. Below, we explore the potential limitations of applying low-code to this area and consider forecasts for this evolving space.
Connecting Analysts With Heavy Code
Most organizations have data fragmented across hundreds or even thousands of systems. That data comes both from within the organization and from outside the enterprise itself, Abbo explained. When data is so scattered, it's tough to extract value from it. Thus, he sees a huge opportunity for a common platform that aggregates relevant data and exposes it to various roles within an organization.
So, where does low-code/no-code fit into the picture? Low-code could enable business analysts to dive deeper into data insights and leverage the AI artifacts produced by “heavy coders” and data scientists. A common platform helps share data and data relationships between these two groups seamlessly, without having to rewrite code or refactor data.
Syncing up pro-coders and non-programmers via low-code opens an exciting opportunity to consistently access the same data insights across an enterprise. “By doing so, you’re enabling every employee of a company to make more effective decisions,” said Abbo. Making such analytics available in a visual interface could empower hundreds of business analysts. “This area is super important and very exciting,” said Abbo. “You can allow analysts to access data securely and allow them to inform better decisions.”
Leveraging the Data of Analytics & Code
The ability to infer predictions from historical precedent can optimize future business moves. As an example, Abbo showcased a sample workflow that ingested confirmed COVID-19 cases from various reputable sources. Aggregating these sources makes it possible to view trends over time and perform statistical hypothesis testing. Interestingly, this sample study found an equal likelihood of COVID-19 contraction among highly mobile and low-mobility groups.
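To make that kind of workflow concrete, here is a minimal sketch in Python using entirely hypothetical case counts (it is not the actual C3 AI workflow or data): it aggregates two fictional sources and runs a two-proportion z-test comparing contraction rates between high- and low-mobility groups.

```python
# Illustrative sketch only: hypothetical counts, not the actual C3 AI workflow.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Pretend these frames were ingested from two separate case-count sources.
source_a = pd.DataFrame({"group": ["high_mobility", "low_mobility"],
                         "cases": [420, 390], "population": [10_000, 9_500]})
source_b = pd.DataFrame({"group": ["high_mobility", "low_mobility"],
                         "cases": [310, 295], "population": [8_000, 7_800]})

# Aggregate the sources into a single view per mobility group.
combined = (pd.concat([source_a, source_b])
              .groupby("group", as_index=False)
              .sum())

# Two-proportion z-test: is the contraction rate different between the groups?
counts = combined["cases"].to_numpy()
totals = combined["population"].to_numpy()
z_stat, p_value = proportions_ztest(counts, totals)

print(combined.assign(rate=combined["cases"] / combined["population"]))
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # a large p-value -> no detectable difference
```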
In a similar fashion, business units could leverage data to refine their manufacturing, distribution, and marketing projections. This could be something like an algorithm to forecast the demand of a product across different geographies. Or, it could help a financial services firm refine its customer engagement. Making such insights available to all citizen data scientists could help more folks test new hypotheses, thus spurring innovation.
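As a rough illustration of the forecasting idea (not any particular product's implementation), the sketch below fits a simple linear trend to hypothetical monthly demand per region and extrapolates one quarter ahead. The region names, figures, and helper function are all invented for the example.

```python
# Minimal sketch: a per-geography linear trend forecast on hypothetical demand data.
import numpy as np
import pandas as pd

# Hypothetical monthly unit sales for two regions (12 months each).
history = pd.DataFrame({
    "region": ["EMEA"] * 12 + ["APAC"] * 12,
    "month":  list(range(12)) * 2,
    "units":  list(np.linspace(100, 160, 12).round()) +
              list(np.linspace(80, 140, 12).round()),
})

def forecast_next_quarter(df: pd.DataFrame) -> pd.Series:
    # Fit a straight-line trend to past demand and extrapolate three months ahead.
    slope, intercept = np.polyfit(df["month"], df["units"], deg=1)
    future_months = np.arange(df["month"].max() + 1, df["month"].max() + 4)
    return pd.Series(slope * future_months + intercept, index=future_months)

for region, df in history.groupby("region"):
    print(region, forecast_next_quarter(df).round(1).to_dict())
```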
Potential Pitfalls of Low-Code/No-Code
So, what are some limitations of applying low-code/no-code to data analytics and AI? Most importantly, the analysis needs a solid statistical foundation, says Abbo. If incoming data is corrupted or biased, the analysis will draw incorrect conclusions. Other potential hiccups include:
- Not being cloud-based: Having browser-based access to web apps is essential to ensure anyone can collaborate across any device. Ensuring models are reusable could avoid reinventing the wheel for each campaign.
- Data crunching limitations: Following that point, organizations may run into computing and storage limitations if processing data on-premise. It’s thus best to seek out cloud-based tools.
- Poor collaboration: Data analysis hinges on solid collaboration. If teams can’t share projects with one another, the value is limited. This underscores the need for a shared platform.
- User interface pains: Any low-code/no-code environment is only as good as its UI. Low-code requires a modern, intuitive UI; otherwise, analysts cannot leverage it for the gains it promises.
- Data aggregation: AI has some pretty data-heavy requirements. To leverage this data, you really need a powerful, agnostic interface that allows teams to collect and unify datasets from anywhere.
Perhaps the most critical point is the collaboration aspect. If a tool does not bridge the divide between no-code and pro-code users, adoption could suffer. What happens if you’re not able to support both groups? What are the downsides? Overall, things are less efficient, says Abbo: citizen data scientists must do far more work to convert and prepare data when assembling fragmented data from various sources. This leads to haphazard, less efficient adoption.
Privacy Implications of Predictive Analysis
Privacy is another top concern for organizations leveraging their data. New reports show that applications are requesting (and leaking) an excessive amount of personal data. There are also ethical grey areas around reselling personal data or leveraging it to produce highly predictive advertising campaigns. If user information is exposed incorrectly, it could violate regulations and even raise real safety concerns when location data is disclosed. With any new power comes great responsibility to ensure use cases are handled carefully and meet compliance requirements.
So, how can data-heavy low-code/no-code platforms avoid data privacy issues? To Abbo, it boils down to making proper governance a core capability of the development platform. This requires Role-Based Access Control (RBAC) to restrict low-code usage to only those with the correct privileges. For example, C3 AI can be configured so users don’t have access to raw data, only to select data objects. “It’s not your typical data lake,” says Abbo. “We need sophisticated visibility rules to protect and comply with GDPR, data privacy, and corporate data security rules.”
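As a generic illustration of the governance pattern Abbo describes (and not C3 AI's actual API), the sketch below shows how role-based visibility rules might expose curated data objects while masking sensitive fields from lower-privilege roles. All class names, roles, and fields here are hypothetical.

```python
# Generic illustration (not C3 AI's actual API): role-based visibility rules
# that expose curated data objects instead of raw tables.
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    fields: dict           # field name -> value
    required_role: str     # minimum role allowed to read this object

# Hypothetical role hierarchy: a higher rank means broader access.
ROLE_RANK = {"analyst": 1, "data_scientist": 2, "admin": 3}

def visible_fields(obj: DataObject, role: str, masked: set) -> dict:
    """Return only the fields a role may see; mask personal data for low ranks."""
    if ROLE_RANK.get(role, 0) < ROLE_RANK[obj.required_role]:
        raise PermissionError(f"{role} may not read {obj.name}")
    return {k: ("***" if k in masked and role == "analyst" else v)
            for k, v in obj.fields.items()}

customer = DataObject("CustomerProfile",
                      {"segment": "enterprise", "email": "jane@example.com"},
                      required_role="analyst")

print(visible_fields(customer, "analyst", masked={"email"}))
# {'segment': 'enterprise', 'email': '***'}
```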
We’re Still Early On
It’s good to acknowledge that we’re still at an early stage, both in the adoption of low-code/no-code as well as the ongoing democratization of AI automation. “We’re really at the beginning of the low-code/no-code explosion,” said Abbo. “I anticipate there’s hundreds of millions of analysts making sub-optimal decisions because they don’t have access to the data they need.”
Empowering knowledge workers to make more effective decisions and improving their collaboration with “heavy coders” will only improve enterprise AI. This has the potential to activate more data inside a company and apply algorithms to make more intelligent forecasts. To sum it up, we’re in the “early innings of very promising development,” said Abbo.