Let’s take a look at new approaches to the challenges that come with inserting the world’s most energy-draining technology into lightweight, energy-efficient devices and networks.
In the fast-evolving world of business, small businesses often face the challenge of competing with larger companies that have more resources. However, recent advancements in artificial intelligence (AI) have leveled the playing field. Small businesses now have access to powerful tools that were once reserved for large corporations. If you’re a small business owner looking to grow, enhance efficiency, or make smarter decisions, AI can be a game-changer.
In this blog, we’ll explore how small businesses can benefit from AI and provide actionable tips for integrating AI into your daily operations.
One of the most popular and accessible applications of AI for small businesses is in customer support. Chatbots powered by AI can provide 24/7 support, answer frequently asked questions, and even handle common issues or requests. By automating these tasks, businesses can improve customer satisfaction while reducing the workload on human agents.
AI can help small businesses optimize their marketing efforts by automating repetitive tasks and offering data-driven insights. AI-powered marketing tools can analyze customer behavior, segment audiences, personalize content, and even automate social media posting.
Customer Relationship Management (CRM) systems have come a long way with the incorporation of AI. AI-powered CRMs can analyze customer data, predict purchasing behavior, and recommend the best actions for increasing sales.
Managing inventory can be a tedious task for small businesses, especially if you deal with seasonal or fluctuating demand. AI can optimize inventory by predicting demand based on historical data, current trends, and external factors like weather or holidays.
AI can assist small businesses in managing finances by predicting cash flow, optimizing budgeting, and identifying potential savings opportunities. Financial AI tools can analyze past spending habits and future revenue streams to create more accurate financial projections.
HR tasks can be time-consuming for small businesses, especially when it comes to recruiting, training, and performance management. AI tools can assist in automating many HR functions, from resume screening to employee performance analysis.
AI can help small businesses detect and prevent fraud by analyzing transaction patterns, monitoring for suspicious activity, and flagging potential risks in real time.
AI is a powerful tool for enhancing e-commerce experiences by providing personalized product recommendations based on browsing history, preferences, and previous purchases.
The future of small business is intertwined with AI. Whether you are looking to improve customer service, boost sales, streamline operations, or make smarter decisions, AI offers tools that can help. The best part is that AI has become more affordable and accessible, even for businesses with limited resources.
By leveraging AI strategically, small businesses can enhance their efficiency, improve their customer experiences, and stay competitive in an increasingly digital world. Start small, experiment with AI tools, and watch your business thrive!
Want to explore AI tools for your small business? Let us know in the comments or reach out to discover more!
Over the last five decades, Moore’s Law correctly predicted that every 18 months computer processing power would approximately double for end users. Unfortunately, there’s now general agreement [3] that this particular facet of the tech revolution is exhausted. Most new improvements in this respect involve increasing power levels, which is particularly obstructive for the development of ‘local’ and mobile machine learning [4].
There remain physical reasons why small devices can’t easily become more powerful and useful than they are at the current state-of-the-art without recourse to cloud services (which come with the latency and security issues of a centralized, network-based approach).
All possible solutions involve some kind of sacrifice.
Anyone over 30 understands these restrictions instinctively, remembering the many incremental sacrifices we made as we gradually transitioned from the more powerful desktop-based computing of the 1990s to the less powerful — but more available — mobile devices of the smartphone revolution more than a decade ago.
Over the last ten years, global interest has grown in developing decentralized, semi-centralized and federated monitoring and regulatory systems in areas and sectors where even basic network infrastructure is poor or non-existent.
Besides the 47% of the world that has no internet access at all [9], there are a number of physically isolated sectors that have always had to depend on expensive dedicated and static technologies: equipment shipped to remote locations and required to operate independently in situ, without updates, without communication with wider business operations networks, and without the commercial advantages of high-capacity data analysis resources.
These isolated sectors have included agriculture and deep sea and shipping operations, as well as architectural integrity monitoring projects and military placements — all of which are either frequently operating with limited or zero network coverage, or else have security issues that preclude the constant use of cloud-based IoT infrastructure.
In these cases, it would be useful if the local sensor assemblies could actually do some of the hard work in processing the data through machine learning systems — especially since the data stream of each one is usually very specific and not always suited for generically trained AI models, as we shall see.
An IoT model typically features a sensor transmitting data over the internet to a central processing infrastructure elsewhere on the internet. In most cases, the sensor apparatus itself has a limited or zero processing capacity, being a ‘dumb’ data capture device capable of sending data to the network and encrypting it if necessary.
Since a primary appeal of IoT infrastructure is the low cost and easy mobility of such sensors, the sensor apparatus may also have little or no storage capacity and lack a hard-wired mains power source. In most cases, data will also be lost for the duration of any local network outages.
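The outage problem can be mitigated in software even on very modest hardware. Below is a minimal sketch (the `SensorBuffer` class and its parameters are hypothetical, invented for illustration) of a sensor that queues readings in a small bounded buffer while the network is down and flushes them on reconnection:

```python
from collections import deque

class SensorBuffer:
    """Buffers readings during network outages; when the (small) on-device
    memory fills up, the oldest entries are silently evicted."""

    def __init__(self, capacity=64):
        self.queue = deque(maxlen=capacity)  # bounded: old data is dropped first

    def record(self, reading, network_up, send):
        self.queue.append(reading)
        if network_up:
            self.flush(send)

    def flush(self, send):
        while self.queue:
            send(self.queue.popleft())

# Simulated usage: the link drops for two readings, then recovers.
sent = []
buf = SensorBuffer(capacity=4)
for value, up in [(21.5, True), (21.7, False), (21.9, False), (22.1, True)]:
    buf.record(value, network_up=up, send=sent.append)

print(sent)  # all four readings arrive once the link recovers
```

The bounded `deque` reflects the trade-off described above: without mains power or real storage, a sensor can only smooth over brief outages, not eliminate data loss.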
If the sensor must communicate data to the cloud over 3G, 4G or 5G, it will experience a rising scale of energy consumption [10]; if the information is encrypted, higher still [11]. This means periodic manual battery replacement, or else the use of more innovative renewable power sources such as solar [12], wind [13], ambient vibrations [14], or even radio waves [15].
This IoT configuration is most common where the sensor is isolated, such as a traffic-monitoring computer vision camera on a long and remote stretch of road [16], or an installation to monitor for cracks in a building or bridge [17].
In 2011, Cisco coined the term ‘fog computing’ to indicate a more localized data-gathering/analysis framework [18].
A fog deployment is effectively the same as an IoT deployment (see above), except that the sensor transmits data to a local hub rather than over the internet to a central server. The hub itself may transmit processed or gathered data periodically back to the cloud and will have more power and computing resources than the sensor equipment.
Fog computing is most suited to a large group of low-power sensors that contribute their data input to a more resilient and capable local machine; an example would be an array of agricultural sensors dispersed over a harvest field, with a nearby hub collating and curating the various data streams.
This improved proximity means that weaker but more power-efficient local communication protocols such as Bluetooth Low Energy (BLE) can often be used by the sensors to send information [19].
On the downside, fog architectures are unsuited to highly isolated sensors that fall outside the narrow ranges of these low-powered network protocols.
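The hub's role of collating and curating many low-power sensor streams can be sketched as follows (the `FogHub` class is hypothetical, and aggregation by simple averaging is an assumption made for illustration):

```python
class FogHub:
    """Collects raw readings from many low-power field sensors and
    periodically reports a compact summary upstream to the cloud."""

    def __init__(self):
        self.readings = {}  # sensor_id -> list of raw values

    def ingest(self, sensor_id, value):
        self.readings.setdefault(sensor_id, []).append(value)

    def summarize(self):
        # Curate: one mean value per sensor, instead of every raw sample.
        summary = {sid: sum(vals) / len(vals)
                   for sid, vals in self.readings.items() if vals}
        self.readings = {}  # reset the reporting window
        return summary

# Example: soil-moisture sensors dispersed over a field report to the hub.
hub = FogHub()
for sid, v in [("north", 30), ("north", 34), ("south", 22), ("east", 40)]:
    hub.ingest(sid, v)

summary = hub.summarize()
print(summary)  # {'north': 32.0, 'south': 22.0, 'east': 40.0}
```

The point of the pattern is that only the compact summary ever crosses the expensive cloud link; the raw samples live and die on the local network.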
If you live in a major city and use the services of a global media streamer such as Netflix, chances are you’re already benefiting from a type of fog computing, since Netflix is one of a limited number of tech giants with enough capitalization to create hardware caching installations within the infrastructure of the world’s most popular ISPs.
The company’s Open Connect program [20] is edge caching on steroids: according to demand, the portions of the media catalogue available to the host country are selectively copied over to a node inside the ISP’s network, which means that the content you’re watching is not being streamed from the internet at all but effectively from your own local network, similar to home-streaming applications such as Plex.
True to the fog paradigm, the ‘local’ Open Connect installation sends back collated data on user behavior and other statistics via AWS to a larger central server [21] in order to derive new insights for the company’s extensive machine learning research program [22].
In ‘edge’ IoT, the sensor or local equipment is powerful enough to process data directly, either through its on-board hardware or a hard-wired connection to a capable unit.
Predictably, power consumption and deployment and maintenance costs rise rapidly when an IoT unit becomes this sophisticated, making the edge IoT proposition a hard trade-off between cost and capability.
Edge computing is suitable for cases where autonomy is critical. This includes the development of self-driving vehicles, which depend on low-latency response times and cannot afford to wait for network interactions [23]; vehicle traffic monitoring [24]; SCADA and other industrial frameworks [25]; healthcare [26]; and urban and domestic monitoring [27], among other sectors.
In terms of deployment logistics and cost, it could be argued that edge computing is an effective return to the pre-IoT era of expensive and dedicated bespoke monitoring and data analysis systems, with a little periodic cloud exchange and machine learning added.
However, the long-term business appeal of AIoT may lie in a more creative use of existing resources, technologies and infrastructure, to enable AI-based technologies to operate effectively and economically at a local level.
We saw in our recent article on Apple’s Core ML that it’s not only possible but preferable to train a machine learning model locally, so that it can adapt to the patterns and habits of a single user.
The alternative would be to constantly send user input data over the internet so that it can be processed by machine learning frameworks on a central server, and the derived algorithms sent back to the user’s device.
This is a terrible approach for many reasons: it adds round-trip latency to every interaction, consumes bandwidth and battery on constant uploads, exposes personal data in transit, and fails entirely whenever the network does.
The Core ML infrastructure instead facilitates on-device machine learning templates across a range of sectors, including image, handwriting, video, text, motion, sound and tabular-based data.
It’s much quicker to make a user-specific machine learning model from a template that’s been partially trained in the intended domain (e.g. text inference, computer vision applications, audio analysis) than to train it up from zero.
In the case of a local predictive text AI model, it’s enough to provide a partially-trained template for the target language, such as English or Spanish. A neural network in an on-device System on a Chip (SoC) can then provide incremental text-based data from user interaction until the model has adapted completely to the host.
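The incremental adaptation described above can be illustrated with a toy bigram predictor. Here the ‘pre-trained template’ is just a hypothetical seed frequency table; a production system would ship a partially trained neural network instead:

```python
class PredictiveText:
    """Toy next-word predictor: starts from a pre-trained bigram table
    and adapts incrementally as the user types."""

    def __init__(self, seed_bigrams):
        # The shipped 'template': generic counts for the target language.
        self.counts = {k: dict(v) for k, v in seed_bigrams.items()}

    def observe(self, text):
        # On-device incremental training from user input.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts.setdefault(prev, {})
            self.counts[prev][nxt] = self.counts[prev].get(nxt, 0) + 1

    def predict(self, word):
        options = self.counts.get(word.lower())
        if not options:
            return None
        return max(options, key=options.get)

# Generic English seed: after "good", "morning" is initially most likely.
model = PredictiveText({"good": {"morning": 5, "night": 3}})
print(model.predict("good"))  # morning

# The user habitually writes "good grief"; the model adapts to the host.
for _ in range(6):
    model.observe("good grief")
print(model.predict("good"))  # grief
```

The template gives useful predictions on day one, while the per-user counts gradually dominate, which is the whole appeal of on-device adaptation.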
Text data of this kind might also include transcribed speech via AI-driven audio analysis — another ‘tightly-defined’ domain that’s amenable to optimized and lightweight model templates.
In the case of video analytics, one of the most sought-after areas in IoT development, it’s not quite so simple, because the visual world is much harder to categorize and segment than narrow domains such as text.
Consider the huge variation in data from Transport for London’s network of traffic-monitoring webcams.
The low-resolution feeds and sheer variety of shapes and fixed/periodical occlusions, combined with the peculiarities of weather and lighting, would defy any attempt to create a generic ‘traffic in London’ template model for deployment to an AI-enabled sensor network. Even if it were feasible, the ‘base’ model would need to be so comprehensive and sprawling as to preclude use in a lightweight AIoT environment.
Instead, an opportunistic AIoT approach would add local machine learning resources, either per-device or via a mesh network of nearby processing hubs, to help each installation become a ‘domain expert’ in its own particular fixed view of the world, starting from a minimally pre-trained model.
In such a case, initial training would take longer and require more human intervention (such as manually labelling certain frame-types as ‘accident’, ‘traffic jam’, etc. so that the device learns to send high-value alerts as a priority over the network noise); but over time, essential ‘event patterns’ would emerge to make early generalization easier across the entire network.
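A minimal sketch of this ‘domain expert’ idea, using an online nearest-centroid classifier over per-frame features (the feature values, labels, and the `FrameClassifier` class are all invented for illustration; a real deployment would work on learned embeddings):

```python
class FrameClassifier:
    """Online nearest-centroid classifier: each manually labelled frame
    nudges the centroid for its event class toward that frame's features."""

    def __init__(self):
        self.centroids = {}  # event label -> (feature vector, sample count)

    def label(self, features, event):
        # Human-in-the-loop training step during initial deployment.
        vec, n = self.centroids.get(event, ([0.0] * len(features), 0))
        vec = [(v * n + f) / (n + 1) for v, f in zip(vec, features)]
        self.centroids[event] = (vec, n + 1)

    def classify(self, features):
        def dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(centroid[0], features))
        return min(self.centroids, key=lambda e: dist(self.centroids[e]))

clf = FrameClassifier()
# Features: (vehicle density, mean speed) -- hypothetical, normalized to 0..1.
clf.label((0.9, 0.1), "traffic jam")
clf.label((0.2, 0.8), "free flow")
clf.label((0.5, 0.0), "accident")

print(clf.classify((0.85, 0.15)))  # traffic jam
```

Because each camera trains only on its own fixed view, the model stays small enough for an AIoT unit, at the cost of the manual labelling effort described above.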
There is currently a proliferation of research initiatives to develop low-power ASICs, SoCs, and other integrated and embedded solutions for AIoT.
The most prominent market leaders at the moment are Google’s Edge Tensor Processing Unit (TPU) [34], a scaled-down descendant of the data center TPUs that powered AlphaGo’s famous match against Go world champion Lee Sedol [35], and NVIDIA’s Jetson Nano [36], a lightweight AIoT quad-core unit available from US$99.
Other commercial contenders and projects in development include:
A highly configurable AI micro-processor [37] from British group Xmos, priced as low as US$1.
A very low-power ARM microarchitecture [38], now owned by NVIDIA.
A new architecture to accelerate deep neural networks by compressing them into on-chip SRAM, achieving 120x energy savings over comparable networks [39].
A deep convolutional neural network (CNN) accelerator [40] from MIT that takes advantage of the primitive generalization architectures of popular open source machine learning frameworks such as PyTorch, Caffe, and TensorFlow.
Featuring a native 4K image processor pipeline and accommodation for eight high-definition sensors, the Myriad X [41] is aimed at computer vision development with an emphasis on minimal power requirements. The unit was adopted in 2017 by Google for its Clips camera [42].
Designed for battery-driven sensors and wearable devices, GAP9 trades performance against endurance and offers up to 32-bit floating point computation [43].
Designed for vision-based and data center deployments, the Mythic processor uses an innovative approach to weight compression in order to fit machine learning operations into very lightweight deployments by utilizing flash memory [44].
Chinese SoC manufacturer Rockchip will follow up [45] on the success of their RK3399Pro AIoT unit [46] with this multi-platform offering, which features an NPU accelerator, support for 8Kp30 video streams, eight processor cores, and 4x Cortex-A55 cores.
Featuring a processor-in-memory design, the NDP101 draws power in the microwatt range while performing inference workloads inside Amazon’s Alexa range of smart assistant products [47].
The above are only a selection from an ever-widening range [48] of hardware and software architectures that are currently seeking to solve the performance vs. energy conundrum in AIoT.
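The weight-compression schemes several of these chips rely on can be illustrated with a minimal 8-bit linear quantization sketch (symmetric per-tensor scaling is assumed here, and the weight values are invented; real toolchains use more elaborate per-channel schemes):

```python
def quantize(weights):
    """Symmetric 8-bit linear quantization: store int8 values plus one
    float scale, cutting weight storage roughly 4x versus float32."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)

print(q)  # small signed integers, one byte each instead of four
print(max(abs(a - w) for a, w in zip(approx, weights)) < scale)  # True
```

Smaller weights mean less memory traffic, and on these devices memory traffic, not arithmetic, is usually where the energy goes.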
The challenge of AIoT is also a spur to invention, since the sector’s future depends on fully exploiting limited local resources to run low-power and highly targeted neural networks for inference and other local machine learning workloads.
While the evolution of AIoT hardware is faced with quite rigid logistical boundaries (such as battery life, limited local resources, and the basic laws of thermodynamics), research is only now beginning to uncover new software approaches to minimize the energy drain of neural networks while keeping them performant in low-resource environments [49].
This impetus is set not only to benefit mobile AI eventually, but also to feed back into the wider machine learning culture over the next five or ten years.