5 key trends shaping the future of AI: Synthetic data, automation, LLMs, no-code solutions, and environmental impact

By Spiros Potamitis, Senior Analytics Product Manager, SAS

Synthetic data will gain significant traction as organisations face tighter regulations and sharing sensitive data across borders becomes more challenging. Because synthetic data can capture the statistical properties of the original data source with high accuracy without exposing real records, it helps organisations overcome regulatory barriers and unlock innovation.
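To make the idea concrete, here is a minimal, hypothetical Python sketch (not any vendor's method) that generates synthetic records preserving the means and correlations of a small numeric dataset; production synthetic-data tools rely on far richer generative models such as copulas, GANs or diffusion models.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical "sensitive" source data with two numeric features.
original = pd.DataFrame({
    "income": rng.normal(50_000, 12_000, size=1_000),
    "age": rng.normal(40, 10, size=1_000),
})

# Fit the statistical properties we want to preserve.
mean = original.mean().to_numpy()
cov = original.cov().to_numpy()

# Draw synthetic records from the fitted distribution.
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=1_000),
    columns=original.columns,
)

# The synthetic sample mirrors the original's means and correlations
# without containing any real individual's record.
print(original.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])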

Automated machine learning (AutoML) has already been implemented successfully by multiple organisations, allowing data scientists to develop powerful models with the click of a button. This level of automation will expand into other ModelOps areas such as data engineering, monitoring, and retraining, gradually covering the end-to-end analytics lifecycle with the same simplicity and ease (a minimal sketch of the idea follows below).

Large language models (LLMs) are often presented as open source, but they come with significant limitations when it comes to applying them at scale. Unfortunately, only certain tech giants can bear the cost of training such models, so true democratisation of LLMs will remain a far-fetched dream.
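To illustrate what AutoML automates behind that single click (a hypothetical sketch, not SAS's implementation), the following Python snippet tries several candidate models, scores each with cross-validation, and keeps the best performer; real AutoML pipelines add feature engineering, hyperparameter search and ensembling on top.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy classification dataset standing in for a real business problem.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Candidate models an AutoML engine might evaluate.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with 5-fold cross-validation and keep the best.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(f"Best model: {best} (CV accuracy {scores[best]:.3f})")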

The era in which everything needed to be a little ‘hacky’ and understood by the very few to be considered a powerful, sophisticated solution will come to an end. No-code AI solutions will prove that true sophistication comes from simplicity and will bring tangible value to organisations.

The cloud has accelerated unparalleled innovation in AI. However, all that computing carries a carbon footprint greater than that of the airline industry, accounting for up to 3.7% of global carbon emissions. Running ChatGPT, for example, is estimated to cost $700,000 a day, and that figure will only increase. As organisations continue investing in digital services and the cloud, the environmental cost of those services and the ESG policies of hyperscalers will play an increasingly important role in future investment decisions.
