In the world of machine learning (ML) and artificial intelligence (AI), foundation models (FMs) are revolutionizing the way AI is developed and applied. These models, trained on a broad set of unlabeled data, can be adapted to a wide range of tasks and applications. By combining the power of FMs with edge computing, enterprises can run AI workloads for FM fine-tuning and inference at the operational edge, enabling faster deployment, near-real-time predictions, and reduced costs.
What Are Foundation Models (FMs)?
FMs are AI models trained on large amounts of unlabeled data, enabling them to learn general-purpose representations that transfer across domains and problems. Unlike traditional AI models that specialize in a single task within a single domain, FMs serve as a foundation for a multitude of applications. They address the challenges of scaling AI adoption by ingesting vast amounts of unlabeled data and using self-supervised techniques for training, eliminating the need for extensive human labeling and annotation.
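As a rough illustration of the self-supervised idea, the toy PyTorch sketch below trains a tiny transformer to predict tokens that were masked out of its own input, so the training signal comes from the data itself rather than from human annotators. The model sizes, masking rate, and random data are illustrative assumptions, not details of any production FM.

```python
import torch
import torch.nn as nn

# Toy masked-token prediction: hide some tokens and train the model to
# reconstruct them, so the data supplies its own labels.
vocab_size, d_model, mask_id = 1000, 64, 0

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(1, vocab_size, (8, 16))  # a batch of token sequences
mask = torch.rand(tokens.shape) < 0.15          # mask ~15% of positions
inputs = tokens.masked_fill(mask, mask_id)

logits = head(encoder(embed(inputs)))
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()  # gradients flow without any human-written labels
```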
How Do Large Language Models (LLMs) Fit into Foundation Models?
Large language models (LLMs) are a class of FMs consisting of neural networks trained on massive amounts of unlabeled text. They can perform a wide variety of natural language processing (NLP) tasks and have become integral to AI applications. By learning the statistical structure of language, LLMs can interpret and generate text fluently enough to be valuable across a wide range of industries.
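For a concrete sense of what "a wide variety of NLP tasks" looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library (an assumption chosen for illustration; the article does not prescribe a toolkit). The same pipeline API points a pretrained model at classification or question answering with one line each:

```python
from transformers import pipeline

# Each pipeline wraps a pretrained checkpoint suited to its task; the same
# API covers classification, question answering, summarization, and more.
classifier = pipeline("sentiment-analysis")
print(classifier("The edge appliance cut our inference latency in half."))

qa = pipeline("question-answering")
print(qa(question="Where are foundation models pretrained?",
         context="Foundation models are pretrained in the cloud and then "
                 "fine-tuned for downstream tasks at the edge."))
```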
Scaling AI at the Edge
Deploying AI models at the edge, where data is generated and work is performed, allows for near-real-time predictions while maintaining data sovereignty and privacy. However, scaling AI deployments at the edge comes with challenges such as deployment time and cost, as well as day-to-day management of numerous edge locations.
IBM has developed an edge architecture that addresses these challenges by introducing an integrated hardware/software (HW/SW) appliance model. This model enables zero-touch provisioning of software, continuous monitoring of edge system health, and centralized management of software, security, and configuration updates. The architecture follows a hub-and-spoke deployment configuration, with a central cloud acting as the hub and edge-in-a-box appliances serving as spokes at each edge location.
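The snippet below sketches the hub-and-spoke interaction in plain Python. The endpoint, routes, and payload fields are hypothetical stand-ins invented for illustration, not IBM's actual management API; the point is simply that each appliance registers itself with the hub and then reports health continuously, so no operator ever has to touch the box.

```python
import json
import urllib.request

# Hypothetical management plane -- URL, routes, and fields are invented
# for illustration only, not an IBM API.
HUB_URL = "https://hub.example.com/api/v1"

def _post(route: str, body: dict) -> None:
    data = json.dumps(body).encode()
    req = urllib.request.Request(f"{HUB_URL}/{route}", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def register_spoke(site_id: str) -> None:
    """Zero-touch-style registration: the appliance announces itself on boot."""
    _post("register", {"site": site_id, "role": "edge-appliance"})

def report_health(site_id: str, gpu_util: float) -> None:
    """Periodic heartbeat so the hub can monitor every spoke centrally."""
    _post("health", {"site": site_id, "gpu_util": gpu_util})

if __name__ == "__main__":
    register_spoke("plant-07")
    report_health("plant-07", gpu_util=0.42)
```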
The Value of FM Fine-Tuning and Inference at the Edge
By pre-training FMs in the cloud and fine-tuning them for specific downstream tasks at the edge, enterprises can optimize the use of compute resources and reduce operational costs. Fine-tuning requires far fewer labeled samples than training from scratch and can be performed with a few GPUs at the enterprise edge, allowing sensitive data to stay within the operational environment. Serving the fine-tuned model at the edge also reduces latency and data transfer costs.
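A minimal sketch of what such edge fine-tuning can look like, assuming a PyTorch/torchvision stack and using a generic ImageNet-pretrained vision transformer as a stand-in for the cloud-pretrained FM: the backbone is frozen and only a small task head is trained, so a handful of labeled samples and a single modest GPU suffice, and no raw data leaves the site.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Stand-in for a cloud-pretrained FM: a generic pretrained ViT backbone.
model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False            # freeze the pretrained backbone
model.heads = nn.Linear(768, 2)        # small trainable head: defect / no defect

optimizer = torch.optim.AdamW(model.heads.parameters(), lr=1e-4)

# Random tensors stand in for a handful of locally labeled samples.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                        # only the head receives gradients
optimizer.step()
```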
To demonstrate the value of FM fine-tuning and inference at the edge, IBM deployed a vision-transformer-based FM for civil infrastructure on a three-node edge cluster. The deployment demonstrated reductions in both time-to-insight and the cost of detecting defects from drone imagery.
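A correspondingly simple serving loop might look like the sketch below, again with hypothetical names and a random tensor standing in for a drone frame. Because the model runs where the imagery is captured, the only cost per frame is local compute, which is what drives the time-to-insight reduction.

```python
import time
import torch
from torchvision.models import vit_b_16

# A fine-tuned two-class model; in practice you would load saved weights,
# e.g. model.load_state_dict(torch.load("defect_vit.pt")) -- path illustrative.
model = vit_b_16(num_classes=2)
model.eval()

frame = torch.randn(1, 3, 224, 224)  # stand-in for one incoming drone frame

start = time.perf_counter()
with torch.no_grad():
    is_defect = model(frame).argmax(dim=1).item() == 1
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"defect={is_defect} latency={elapsed_ms:.1f} ms")  # no network round trip
```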
Summary
Combining IBM watsonx data and AI platform capabilities with edge-in-a-box appliances empowers enterprises to run AI workloads at the edge, reducing deployment time, enabling near-real-time predictions, and optimizing resource utilization. FM fine-tuning and inference at the edge deliver lower latency, lower data transfer costs, and improved security. With FMs and edge computing, the possibilities for AI deployments are expanding, opening doors to faster and more efficient AI adoption across industries.
FAQs
What are foundation models (FMs)?
Foundation models (FMs) are AI models trained on a broad set of unlabeled data, allowing them to be adapted to various downstream tasks and applications. Unlike traditional AI models, FMs learn general-purpose representations that work across domains and problems.
What are large language models (LLMs)?
Large language models (LLMs) are a class of foundation models consisting of neural networks trained on massive amounts of unlabeled text. These models excel at natural language processing (NLP) tasks and have become integral to AI applications.
What is the advantage of running AI workloads at the edge?
Running AI workloads at the edge enables near-real-time predictions while abiding by data sovereignty and privacy requirements. It reduces latency and data transfer costs, allowing for faster and more efficient data analysis and insights.
How does edge-in-a-box architecture facilitate AI deployments at the edge?
The edge-in-a-box architecture provides an integrated hardware/software (HW/SW) appliance model for AI deployments at the edge. It enables zero-touch provisioning, continuous monitoring of edge system health, and centralized management of software, security, and configuration updates, making AI deployments at the edge more scalable and efficient.
How does fine-tuning AI models at the edge reduce operational costs?
Fine-tuning AI models at the edge keeps sensitive training data within the operational environment, avoiding the cost of transferring it to the cloud. It also optimizes resource utilization by using a few GPUs at the enterprise edge instead of extensive cloud compute resources.