diff --git a/docs/contents/frameworks/frameworks.html b/docs/contents/frameworks/frameworks.html index 007a0866..856c67f4 100644 --- a/docs/contents/frameworks/frameworks.html +++ b/docs/contents/frameworks/frameworks.html @@ -1411,7 +1411,7 @@

6.9 Choosing the Right Framework

Choosing the right machine learning framework for a given application requires carefully evaluating model, hardware, and software considerations. By analyzing these three aspects, ML engineers can select the optimal framework and customize it as needed for efficient and performant on-device ML applications. The goal is to balance model complexity, hardware limitations, and software integration to design a tailored ML pipeline for embedded and edge devices.

-
+
@@ -1427,7 +1427,7 @@

6.9.2 Software

-
+
@@ -1441,7 +1441,7 @@

6.9.3 Hardware

-
+
@@ -1488,7 +1488,7 @@

6.10.1 Decomposition

Currently, the ML system stack consists of four abstractions as shown in Figure 6.10, namely (1) computational graphs, (2) tensor programs, (3) libraries and runtimes, and (4) hardware primitives.
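A minimal, illustrative sketch of how these four abstractions relate, using a single "add" operator. All names here are hypothetical, chosen only to make the layering concrete; real stacks (e.g., TVM or XLA) implement each layer with far richer IRs:

```python
# (1) Computational graph: operators and their data dependencies,
#     independent of how each operator is implemented.
graph = {"out": ("add", "x", "y")}

# (2) Tensor program: an explicit loop nest implementing "add".
def add_tensor_program(x, y):
    out = [0.0] * len(x)
    for i in range(len(x)):      # (4) each iteration ultimately maps to
        out[i] = x[i] + y[i]     #     hardware primitives (loads, fadd, stores)
    return out

# (3) Library/runtime: dispatches a graph node to a kernel implementation.
kernels = {"add": add_tensor_program}

def run(graph, inputs):
    op, a, b = graph["out"]
    return kernels[op](inputs[a], inputs[b])

print(run(graph, {"x": [1.0, 2.0], "y": [3.0, 4.0]}))  # [4.0, 6.0]
```

The separation matters because each layer can be optimized independently: the graph can be fused or pruned, the tensor program tiled or vectorized, and the runtime can swap kernels per hardware target.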

-
+
diff --git a/docs/contents/introduction/introduction.html b/docs/contents/introduction/introduction.html index 823c5f1b..fd7c89df 100644 --- a/docs/contents/introduction/introduction.html +++ b/docs/contents/introduction/introduction.html @@ -496,14 +496,14 @@

-
DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp clean white background. The image features a winding road traveling through various symbolic landmarks. Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.
+
DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp, clean white background. The image features a winding road traveling through various symbolic landmarks. Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.
-

In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be so seamlessly integrated into our environments that it would become an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.

-

Ubiquitous computing (Weiser 1991), as Weiser imagined, is not merely about embedding processors in everyday objects; it is about imbuing our environment with a form of intelligence that anticipates our needs and acts on our behalf, enhancing our experiences without our explicit command. The key to this ubiquitous intelligence lies in developing and deploying machine learning systems at the edge of our networks.

+

In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be seamlessly integrated into our environments, becoming an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.

+

The concept of ubiquitous computing (Weiser 1991), as envisioned by Weiser, involves more than just incorporating processors into ordinary objects. It revolves around infusing our surroundings with a kind of intelligence that can predict our requirements and take action on our behalf, enhancing our experiences without us having to issue explicit commands. The crucial element of this pervasive intelligence is developing and implementing machine learning systems at the edge of our networks.

Weiser, Mark. 1991. “The Computer for the 21st Century.” Sci. Am. 265 (3): 94–104. https://doi.org/10.1038/scientificamerican0991-94. -

Machine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. When deployed at the edge—closer to where data is generated, and actions are taken—machine learning systems can process information in real-time, responding to environmental changes and user inputs with minimal latency. This capability is critical for applications where timing is crucial, such as autonomous vehicles, real-time language translation, and smart healthcare devices.

+

Machine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. When deployed at the edge, closer to data generation and action, these systems can process information in real time with minimal latency. This capability is critical for latency-sensitive applications such as autonomous vehicles, real-time translation, and smart healthcare devices.

The migration of machine learning from centralized data centers to the edge of networks marks a significant evolution in computing architecture. The need for speed, privacy, and reduced bandwidth consumption drives this shift. By processing data locally, edge-based machine learning systems can make quick decisions without constantly communicating with a central server. This speeds up response times, conserves bandwidth, and enhances privacy by limiting the amount of data transmitted over the network.

Moreover, the ability to deploy machine learning models in diverse environments has led to an explosion of innovative applications. From smart cities that optimize traffic flow in real-time to agricultural drones that monitor crop health and apply treatments precisely where needed, machine learning at the edge enables a level of contextual awareness and responsiveness that was previously unimaginable.

Despite the promise of ubiquitous intelligence, deploying machine learning systems at the edge is challenging. These systems must operate within the constraints of limited processing power, memory, and energy availability, often in environments far from the controlled conditions of data centers. Additionally, ensuring the privacy and security of the data these systems process is paramount, particularly in applications that handle sensitive personal information.

diff --git a/docs/search.json b/docs/search.json index b1ae42e8..710d16a8 100644 --- a/docs/search.json +++ b/docs/search.json @@ -196,7 +196,7 @@ "href": "contents/introduction/introduction.html", "title": "1  Introduction", "section": "", - "text": "DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp clean white background. The image features a winding road traveling through various symbolic landmarks. Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.\n\n\nIn the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be so seamlessly integrated into our environments that it would become an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.\nUbiquitous computing (Weiser 1991), as Weiser imagined, is not merely about embedding processors in everyday objects; it is about imbuing our environment with a form of intelligence that anticipates our needs and acts on our behalf, enhancing our experiences without our explicit command. 
The key to this ubiquitous intelligence lies in developing and deploying machine learning systems at the edge of our networks.\n\nWeiser, Mark. 1991. “The Computer for the 21st Century.” Sci. Am. 265 (3): 94–104. https://doi.org/10.1038/scientificamerican0991-94.\nMachine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. When deployed at the edge—closer to where data is generated, and actions are taken—machine learning systems can process information in real-time, responding to environmental changes and user inputs with minimal latency. This capability is critical for applications where timing is crucial, such as autonomous vehicles, real-time language translation, and smart healthcare devices.\nThe migration of machine learning from centralized data centers to the edge of networks marks a significant evolution in computing architecture. The need for speed, privacy, and reduced bandwidth consumption drives this shift. By processing data locally, edge-based machine learning systems can make quick decisions without constantly communicating with a central server. This speeds up response times, conserves bandwidth, and enhances privacy by limiting the amount of data transmitted over the network.\nMoreover, the ability to deploy machine learning models in diverse environments has led to an explosion of innovative applications. From smart cities that optimize traffic flow in real-time to agricultural drones that monitor crop health and apply treatments precisely where needed, machine learning at the edge enables a level of contextual awareness and responsiveness that was previously unimaginable.\nDespite the promise of ubiquitous intelligence, deploying machine learning systems at the edge is challenging. 
These systems must operate within the constraints of limited processing power, memory, and energy availability, often in environments far from the controlled conditions of data centers. Additionally, ensuring the privacy and security of the data in these systems processes is paramount, particularly in applications that handle sensitive personal information.\nDeveloping machine learning models that are efficient enough to run at the edge while delivering accurate and reliable results requires innovative model design, training, and deployment approaches. Researchers and engineers are exploring techniques such as model compression, federated learning, and transfer learning to address these challenges.\nAs we stand on the threshold of Weiser’s vision of ubiquitous computing, machine learning systems are clear as the key to unlocking this future. By embedding intelligence in the fabric of our environment, these systems have the potential to make our interactions with technology more natural and intuitive than ever before. As we continue to push the boundaries of what’s possible with machine learning at the edge, we move closer to a world where technology quietly enhances our lives without ever getting in the way.\nIn this book, we will explore the technical foundations of machine learning systems, the challenges of deploying these systems at the edge, and the vast array of applications they enable. Join us as we embark on a journey into the future of ubiquitous intelligence, where the seamless integration of technology into our daily lives transforms the essence of how we live, work, and interact with the world around us.", + "text": "DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp, clean white background. The image features a winding road traveling through various symbolic landmarks. 
Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.\n\n\nIn the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be seamlessly integrated into our environments, becoming an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.\nThe concept of ubiquitous computing (Weiser 1991), as envisioned by Weiser, involves more than just incorporating processors into ordinary objects. It revolves around infusing our surroundings with a kind of intelligence that can predict our requirements and take action on our behalf, enhancing our experiences without us having to issue explicit commands. The crucial element of this pervasive intelligence is developing and implementing machine learning systems at the edge of our networks.\n\nWeiser, Mark. 1991. “The Computer for the 21st Century.” Sci. Am. 265 (3): 94–104. https://doi.org/10.1038/scientificamerican0991-94.\nMachine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. 
When deployed at the edge, closer to data generation and action, these systems can process information in real time with minimal latency. This capability is critical for latency-sensitive applications such as autonomous vehicles, real-time translation, and smart healthcare devices.\nThe migration of machine learning from centralized data centers to the edge of networks marks a significant evolution in computing architecture. The need for speed, privacy, and reduced bandwidth consumption drives this shift. By processing data locally, edge-based machine learning systems can make quick decisions without constantly communicating with a central server. This speeds up response times, conserves bandwidth, and enhances privacy by limiting the amount of data transmitted over the network.\nMoreover, the ability to deploy machine learning models in diverse environments has led to an explosion of innovative applications. From smart cities that optimize traffic flow in real-time to agricultural drones that monitor crop health and apply treatments precisely where needed, machine learning at the edge enables a level of contextual awareness and responsiveness that was previously unimaginable.\nDespite the promise of ubiquitous intelligence, deploying machine learning systems at the edge is challenging. These systems must operate within the constraints of limited processing power, memory, and energy availability, often in environments far from the controlled conditions of data centers. Additionally, ensuring the privacy and security of the data these systems process is paramount, particularly in applications that handle sensitive personal information.\nDeveloping machine learning models that are efficient enough to run at the edge while delivering accurate and reliable results requires innovative model design, training, and deployment approaches. 
Researchers and engineers are exploring techniques such as model compression, federated learning, and transfer learning to address these challenges.\nAs we stand on the threshold of Weiser’s vision of ubiquitous computing, machine learning systems stand out as the key to unlocking this future. By embedding intelligence in the fabric of our environment, these systems have the potential to make our interactions with technology more natural and intuitive than ever before. As we continue to push the boundaries of what’s possible with machine learning at the edge, we move closer to a world where technology quietly enhances our lives without ever getting in the way.\nIn this book, we will explore the technical foundations of machine learning systems, the challenges of deploying these systems at the edge, and the vast array of applications they enable. Join us as we embark on a journey into the future of ubiquitous intelligence, where the seamless integration of technology into our daily lives transforms the essence of how we live, work, and interact with the world around us.", "crumbs": [ "MAIN", "1  Introduction"