<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Arielle Mella  on Medium]]></title>
        <description><![CDATA[Stories by Arielle Mella  on Medium]]></description>
        <link>https://medium.com/@ariellemadeit?source=rss-4dd0706c51e6------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*BLf1lHwYhKVN9S8vYlZIbg.png</url>
            <title>Stories by Arielle Mella  on Medium</title>
            <link>https://medium.com/@ariellemadeit?source=rss-4dd0706c51e6------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 14 May 2026 02:52:38 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@ariellemadeit/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Introducing the Datadog Developer Hub]]></title>
            <link>https://ariellemadeit.medium.com/introducing-the-datadog-developer-hub-5f6e483b38a9?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/5f6e483b38a9</guid>
            <category><![CDATA[observability]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[datadog]]></category>
            <category><![CDATA[integration]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Tue, 20 May 2025 00:00:37 GMT</pubDate>
            <atom:updated>2026-01-15T19:54:23.806Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MJA7-RlR6Oe-ZcpUf92auQ.avif" /></figure><p>Finding the right integrations, libraries, and open source tooling to extend a product has long been a challenge for developers. While Datadog has a vast offering of monitoring and observability solutions, many teams need to customize their setup in some way-whether by extending the <a href="https://www.datadoghq.com/blog/datadog-agent/">Datadog Agent</a>, integrating with third-party services, or using SDKs to interact with the <a href="https://docs.datadoghq.com/api/latest/?tab=java">Datadog API</a>. Previously, these resources were spread across various repositories and documentation pages, which made it challenging for developers to find the most suitable tool for their needs.</p><p>We are pleased to announce the launch of the <a href="https://devhub.datadoghq.com/">Datadog Developer Hub</a>, a centralized resource for developers to find components that enable them to extend the capabilities of Datadog. The Developer Hub provides a searchable catalog of official Datadog and community-contributed integrations and libraries, learning resources on how to build integrations, and information on how to join the community of Developer Hub contributors.</p><h3>Access a collection of tools to extend Datadog</h3><p>The Developer Hub brings together a variety of tools that developers can use in conjunction with Datadog, including:</p><ul><li><strong>Agent integrations</strong>: Agent integrations enable you to extend the Datadog Agent by specifying the data it can ingest, how it processes that data, and where it sends it.</li><li><strong>Core Datadog integrations</strong>: Core integrations are maintained by Datadog and provide robust, out-of-the-box functionality for monitoring databases, cloud providers, messaging systems, and more.</li><li><strong>Libraries</strong>: These include Datadog and community-contributed API and <a href="https://docs.datadoghq.com/developers/dogstatsd/?tab=hostagent">DogStatsD</a> client libraries, <a href="https://docs.datadoghq.com/tracing/">APM</a> and <a href="https://docs.datadoghq.com/profiler/">Continuous Profiler</a> libraries, serverless client libraries, command-line tools, and UI wrappers around the Datadog API.</li><li><strong>Sample apps</strong>: A growing collection of sample apps enable you to kick-start your Datadog instrumentation journey.</li></ul><p>Check out the Datadog Developer Hub today, your go-to destination for discovering integrations, libraries, tools, and learning resources that extend Datadog’s capabilities. The Developer Hub will continue to grow as new resources are added to help you customize and enhance your observability workflows. If you aren’t yet a Datadog user, you can sign up for a <a href="https://www.datadoghq.com/blog/datadog-developer-hub/">free 14-day trial</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fAtqNetHPMNvO4F-QZaIiw.png" /></figure><p><em>Originally published at </em><a href="https://www.datadoghq.com/blog/datadog-developer-hub/"><em>https://www.datadoghq.com</em></a><em> on May 20, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5f6e483b38a9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing this year’s new Datadog Ambassadors]]></title>
            <link>https://ariellemadeit.medium.com/introducing-this-years-new-datadog-ambassadors-0f79ab3ca2da?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/0f79ab3ca2da</guid>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[observability]]></category>
            <category><![CDATA[ambassador-program]]></category>
            <category><![CDATA[datadog]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Tue, 13 May 2025 00:00:18 GMT</pubDate>
            <atom:updated>2025-05-16T15:34:19.035Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XfSJkGIpvbWYhd5HxfAIUg.avif" /></figure><p>Datadog Ambassadors share their expertise through blog posts, conference talks, open source contributions, and community leadership, helping developers around the world understand observability, DevOps, security, and more. From leading community events to building custom integrations, our Ambassadors have been hard at work showcasing their Datadog chops.</p><h3><strong>What we’ve been up to</strong></h3><p>Over the past year, our Ambassador community has continued to grow-not only in size, but also in impact. From publishing technical blog posts to giving conference talks and growing local communities, our Ambassadors have been at the forefront of thought leadership in the observability space.</p><p>Whether it’s community organizing or sharing real-world experiences on cloud migrations and platform engineering, our Ambassadors continue to produce thoughtful, hands-on content. Some standout contributions from this year include:</p><p>Several Ambassadors have taken on leadership roles within their local communities, scaling <a href="https://www.datadoghq.com/blog/user-group-program/">User Groups</a> into thriving hubs of learning and collaboration. From meetups to workshops, they’re helping foster strong regional ecosystems that support ongoing learning and connection.</p><ul><li><strong>Ichiro Kano</strong> has played a key role in growing the <a href="https://jdd-ug.github.io/">Datadog Japan User Group</a> (JDDUG) into a vibrant community, regularly organizing in-person meetups and technical sessions that bring together engineers across Japan to share observability best practices.</li><li><strong>Changhyeon Yoon</strong> has been instrumental in growing the <a href="https://datadogkrug.vercel.app/">Datadog Korea User Group</a> (DDKruG), helping organize meetups that highlight real-world use cases, foster community-driven learning, and build a stronger network of Datadog users across Korea.</li></ul><p>This year, the Ambassador program is expanding with a new cohort of builders, security experts, educators, and community leaders-and we’re excited to welcome them to the community.</p><h3><strong>Meet the 2025 Datadog Ambassadors</strong></h3><p><a href="https://www.linkedin.com/in/kellybettendorf/">Kelly</a> is a Staff Security Engineer at <strong>Stavvy</strong>, focused on building practical, scalable security programs. His expertise spans detection engineering, cloud security, SIEM, and security automation-driving initiatives like detection as code, incident response, and compliance enablement. At <a href="https://www.dashcon.io/2024/breakout-sessions/detection-as-code-streamlining-security-operations-with-terraform/">DASH 2024</a>, he shared a “crawl, walk, run” framework for detection as code, helping teams bring structure and scale to their detection efforts. A passionate problem solver and active member of the <a href="https://chat.datadoghq.com/">Datadog Slack community</a>, Kelly brings high energy and curiosity to everything he does-whether it’s architecting security solutions or diving into technical challenges in his home lab.</p><p><a href="https://www.linkedin.com/in/rebecca-cottignies/">Rebecca</a> is a Cybersecurity Engineer at <strong>AssessFirst</strong>, where she leads efforts in governance, SOC, Purple Team operations, and ISO 27001 compliance. 
She’s particularly committed to cybersecurity governance and awareness-raising, with the passion of making this complex field accessible to everyone on her <a href="https://medium.com/@rcottignies">Medium blog</a>. As a member of <a href="https://cesin.fr/">CESIN</a>, she’s also passionate about sharing knowledge and disseminating best practices, contributing to the development of the cybersecurity community in France.</p><p><a href="https://www.linkedin.com/in/shogoh/">Shogo Hasunuma</a> is based in Japan and serves as the head of the Managed Service Provider (MSP) Section at <strong>iret, Inc.</strong> He joined the company in 2015 with no prior experience in IT engineering and has steadily built his career-from an entry-level monitoring operator to roles in infrastructure design and implementation-before being appointed as a team leader in 2019. Currently, he is committed to enhancing managed services by advancing incident response automation, adopting generative AI, and implementing comprehensive observability strategies centered around Datadog. He also supports end users in fostering autonomous DevOps practices by offering guidance on effective usage of Datadog.</p><p><a href="https://www.linkedin.com/in/youngjin-jung/">YoungJin</a> is a DevOps Engineer driving infrastructure modernization at <strong>LG UPlus</strong>. He specializes in solving complex challenges that emerge in enterprise-scale environments, focusing on building scalable, resilient systems through modern DevOps practices. With a strong emphasis on observability and performance tuning, he utilizes Datadog to instrument services, analyze performance metrics, and proactively improve system reliability and the overall user experience. He is an active AWS Community Builder, contributing to the growth of cloud-native practices within the community, and also serves as a HashiCorp Ambassador, advocating for infrastructure as code and automation across large-scale deployments. You can read more about his DevOps adventures on his <a href="https://zerone-code.tistory.com/">blog</a>.</p><p><a href="https://www.linkedin.com/in/dev-yubin/">Yubin</a> is a site reliability engineer at <strong>Karrot</strong>, focused on building resilient, scalable systems that internal developers can trust. She’s passionate about improving reliability through comprehensive monitoring, thoughtful CI/CD design, and pragmatic incident management. Yubin is a dedicated community member, always eager to share and learn alongside fellow engineers. She regularly publishes a number of blog posts on her <a href="https://velog.io/@ycoding/posts">personal blog</a> on Datadog, Kubernetes, and observability events in Korea.</p><p><a href="https://www.linkedin.com/in/michaellevan/">Michael</a> is a Kubernetes and platform engineering expert, author, consultant, and <a href="https://www.cncf.io/">CNCF</a> Ambassador. 
With a knack for turning complexity into clarity, he helps companies around the world level up their infrastructure-and regularly shares his expertise through <a href="https://www.cloudnativedeepdive.com/">blogs</a>, <a href="https://thenjdevopsguy.gumroad.com/l/platformengineeringplaybook">books</a>, <a href="https://thenjdevopsguy.gumroad.com/l/realworldk8scourse">courses</a>, <a href="https://sites.libsyn.com/573355/site">podcasts</a>, and conference talks.</p><p><a href="https://www.linkedin.com/in/jon-lindeheim-b227353a/">Jon</a> is an Engineering Manager at <strong>Axis Communications</strong>, overseeing the core services team for Axis Cloud Connect. With a background in architecture and DevOps, he brings a strategic and technical lens to platform engineering. Jon has shared his insights at events like <a href="https://www.dashcon.io/2024/observability-theater/axis-communications-best-practices-for-monitoring-debugging-and-optimizing-serverless-applications/">DASH 2024</a> and AWS Summits, covering topics like platform engineering, cloud transformation, and DevOps practices in real-world settings.</p><p><a href="https://www.linkedin.com/in/logan-rohloff/">Logan</a> is a cloud and observability lead at <strong>RapDev</strong>, a Datadog Premier Partner. With experience spanning cloud automation, network engineering, and system administration, Logan plays a key role in helping organizations implement and optimize their observability strategies in the most automated and scalable fashion possible. He’s deployed nearly every Datadog product and written more than a dozen custom integrations. Logan also contributed to an <a href="https://github.com/DataDog/datadog-secret-backend">open source utility</a> for managing secrets with the Datadog Agent, which was donated to Datadog in 2024. Check out more of his work on <a href="https://www.rapdev.io/blog">RapDev’s blog</a>.</p><p><a href="https://www.linkedin.com/in/niltonkazuyukiueda/">Nilton</a> is a Senior Data Executive in Business Intelligence, Data Engineering, Machine Learning and Generative AI at <strong>Deloitte</strong>, with over a decade of experience in multinational and global companies, leading large-scale corporate strategic initiatives. A longtime advocate of knowledge sharing, he actively contributes to the Brazilian tech community through technical talks, blog posts, and open discussions on cloud-native best practices.</p><p>You can check out the full list of Datadog Ambassadors and read more about the program <a href="https://www.datadoghq.com/ambassadors/">here</a>.</p><h3><strong>Thank you, Ambassadors!</strong></h3><p>The Datadog Ambassadors program continues to grow as a vibrant community of technical leaders, content creators, and practitioners who are shaping how teams around the world think about observability, reliability, and performance. We’re excited to see how this year’s Ambassadors will continue to inspire others in the year ahead.</p><p>Want to be an Ambassador? Learn more about the program <a href="https://www.datadoghq.com/ambassadors/">here</a>.</p><p><em>Originally published at </em><a href="https://www.datadoghq.com/blog/datadog-ambassadors-2025/"><em>https://www.datadoghq.com</em></a><em> on May 13, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0f79ab3ca2da" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Automated agriculture: Building a DIY hydroponic gardening system]]></title>
            <link>https://ariellemadeit.medium.com/automated-agriculture-building-a-diy-hydroponic-gardening-system-97023e70d7a6?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/97023e70d7a6</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[hydroponics]]></category>
            <category><![CDATA[gardening]]></category>
            <category><![CDATA[raspberry-pi]]></category>
            <category><![CDATA[data]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Tue, 17 Sep 2024 16:24:26 GMT</pubDate>
            <atom:updated>2024-09-17T16:24:26.813Z</atom:updated>
            <content:encoded><![CDATA[<p>Outdoor gardening and nurturing indoor plants have been a long-time hobby of mine. There’s something truly refreshing about being surrounded by beautiful, lush greenery.</p><p>My outdoor garden, in particular, has always been focused on flowers. I’ve carefully tended to a mix of annuals and perennials throughout the seasons, killing and bringing to life many species of flora over the years, trying to put the jungle in my concrete jungle apartment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yQN2Sbsf7JImAtE1.png" /><figcaption><em>A look into the current plant setup in my apartment.</em></figcaption></figure><p>I’ve always been intrigued by the idea of growing my own vegetables and herbs. The thought of cultivating fresh produce is appealing, but I’ve often felt overwhelmed by the perceived complexity: the care, knowledge, monitoring, and maintenance that produce requires.</p><p>I began to wonder: could I create a system that allows me to grow plants with the precision and care of a professional botanist and transform from city girl to farmer with a few lines of code?</p><p>This curiosity led me to the concept of building a hydroponic garden indoors.</p><p>My vision is to grow herbs and vegetables from the comfort of my own home while integrating hardware and software to collect data, leverage machine learning, and manage my garden remotely. This system would provide real-time insights into my garden’s needs, ensuring each plant gets exactly what it needs to thrive.</p><h3>Developing my hydroponic gardening system</h3><p>My research began with exploring existing indoor hydroponic grow systems.</p><p>It felt like a real-life Goldilocks situation: one system was far too extravagant, both in size and cost (priced at several months’ rent), yet it featured an appealing remote monitoring app. Another system was too small, producing only three herb varieties per month, which wouldn’t satisfy my constant need for cilantro.</p><p>As I dug deeper into the soil of my research, I stumbled upon online communities that built their own hydroponic setups. I was surprised to discover just how vast this sub-community is, with enthusiasts creating everything from compact desktop systems to large-scale setups that fill entire backyards, growing enough arugula to feed entire neighborhoods.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ViN-toRvGH870Yxp.png" /><figcaption><em>A quick look into hydroponic gardening on TikTok shows a large and engaged community.</em></figcaption></figure><p>This vibrant community inspired me to build my own setup with Viam: one where I could have complete control over my plants’ growth, lighting, water, and nutrient distribution, and, most importantly, design the smart monitoring system of my dreams.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9pky3i9voVTysNjF.png" /><figcaption><em>Reddit has a thriving hydroponics community, made up of 135,000 garden enthusiasts.</em></figcaption></figure><h3>Leaning into the Kratky method for hydroponics</h3><h4>What is the Kratky method?</h4><p>After researching various hydroponic techniques, I used the Kratky method for my garden.
The Kratky method is a passive form of hydroponics, making it ideal for beginners or anyone looking for a simple, cost-effective setup.</p><p>This approach allows me to monitor the root system and water nutrient levels closely, which is crucial for ensuring healthy plant growth.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/0*QIMmG8iJplcTEQxu.png" /><figcaption>Diagram describing the Kratky method enabling a plant’s growth. (<a href="https://www.trees.com/gardening-and-landscaping/the-gratky-method"><em>source</em></a><em>)</em></figcaption></figure><h4>How does the Kratky method work?</h4><p>The Kratky method is essentially a simplified version of the Deep Water Culture (DWC) technique, another common hydroponic method. In a typical DWC setup, plants are suspended in special pots or nets, with their roots fully submerged in a reservoir of aerated, nutrient-rich water.</p><p>This setup provides continuous access to nutrients while requiring air pumps to oxygenate the water.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/807/0*68amcd_D_Evq058G.png" /><figcaption>Diagram describing the Deep Water Culture approach, which is less suitable for this project (<a href="https://www.trees.com/gardening-and-landscaping/the-gratky-method">source</a>)</figcaption></figure><p>The Kratky method, however, offers a clever twist on the DWC approach by eliminating the need for air pumps. Instead, it maintains a 3–4 cm gap between the plant holder and the water surface. This gap allows air to circulate around the roots, providing the necessary oxygen without the need for additional equipment.</p><p>By suspending the plants slightly above the water level instead of having them float directly on top, the Kratky method simplifies the overall hydroponic setup. It reduces both the cost and complexity while still providing an efficient way to grow a variety of plants indoors.</p><h3>Designing my hydroponic garden</h3><p>Hydroponics is relatively straightforward: germinate seeds, grow them in oxygenated water without soil, and, ideally, enjoy the benefits of a flourishing indoor garden.</p><p>To keep costs low, I started by exploring inexpensive PVC pipe systems that I could build indoors.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*IvZ7sWOcoRDp1iDS.png" /><figcaption><em>My initial concept sketch for the hydroponic garden.</em></figcaption></figure><h4>Setting up visual tracking</h4><p>To document the growth process of my plants, I wanted to create a timelapse with images and use the images to train custom <a href="https://www.viam.com/post/computer-vision-object-detection-guide">ML models</a>.
For this, I’m using Viam to collect image data from a camera, capturing every stage of my plants’ development.</p><h3>Monitoring the garden’s environment and health</h3><p>Alongside visual tracking, I needed to monitor crucial environmental factors such as ambient temperature, humidity, and other conditions to ensure my plants were thriving.</p><p>Viam’s built-in sensor modules make it easy to monitor these variables without having to write extensive code, streamlining the setup and management of the system.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vXx26WfvOihFoq6q.png" /><figcaption>A look at some of the built-in environmental sensor modules found in the <a href="http://app.viam.com">Viam app</a>.</figcaption></figure><p>Maintaining the right nutrient balance is essential in hydroponics, so I needed a way to track the pH levels of the water and the nutrients I’m adding. I found an affordable pH sensor compatible with a Raspberry Pi, which I plan to integrate into <a href="http://app.viam.com/registry">Viam’s Registry</a> by creating a custom module.</p><p>I’m also incorporating flow sensors to monitor water circulation and ensure that plant roots aren’t clogging the system.</p><p>By gathering all this data, I can create a customized app using Viam, centralizing all the monitoring and data collection. This will allow me to build a truly smart hydroponic system with all the functionalities I need to maintain my garden effortlessly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*95ktkyfS6uF5FGiJ.jpeg" /><figcaption><em>Materials needed for this project include (from left to right): pH sensor, BME280, BME680, Raspberry Pi 5, and a webcam.</em></figcaption></figure><h3>Kicking off germination</h3><p>To kickstart my indoor hydroponic garden, I bought a variety of seeds to experiment with and chose those with relatively low germination times. I decided to begin with arugula, basil, buttercrunch lettuce, cilantro, parsley, and bibb lettuce, all favorites that I love to use in salads and as meal toppings.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*GaiFNhYgIy3F_TCz.png" /><figcaption><em>Photos of the germination process on day 0 and day 12.</em></figcaption></figure><p>The germination process itself is quite hands-on: I place the seeds between damp paper towels, moisten them with filtered water to maintain a neutral pH, and then seal them in Ziploc bags to create a humid environment.</p><p>Then comes the most exciting part: waiting for the first signs of sprouting and building a robot to monitor that process.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FTE2tHIQYp7o%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DTE2tHIQYp7o&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FTE2tHIQYp7o%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/5cadfffa56d5ca141b83cb28d38511d2/href">https://medium.com/media/5cadfffa56d5ca141b83cb28d38511d2/href</a></iframe><h3>Building the Germination Station</h3><p>While waiting for the seeds to sprout, I used Viam to set up a simple yet effective monitoring system to ensure ideal conditions for germination.
I built this system using a webcam, a BME680 environmental sensor, and a Raspberry Pi, allowing me to capture thousands of images to <a href="https://www.viam.com/post/diy-hydroponic-gardening-system-build">visually track the growth process</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2NlTIqkPx-cipAnm.png" /><figcaption><em>Monitoring the seeds’ germination within Viam’s Data Management Service.</em></figcaption></figure><p>Alongside the images, I also gathered sensor data to monitor critical environmental factors such as temperature, pressure, humidity, and gas levels, all essential for maintaining a balanced greenhouse environment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yTsFTT6J5LZI6v09.png" /><figcaption>Real-time sensor data visualized through <a href="http://app.viam.com">Viam’s app</a> interface.</figcaption></figure><p>For successful germination, I aimed to keep the temperature within the ideal range of 65–70 degrees Fahrenheit. Initially, I placed the germination bags by a window in a plastic container, but the sensors quickly showed that temperatures reached up to 90 degrees in direct sunlight.</p><p>This data prompted me to relocate my seedlings promptly to prevent them from overheating and ensure they stayed in a nurturing environment.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FSzEjD9VdqBI%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DSzEjD9VdqBI&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FSzEjD9VdqBI%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b7ddd73810d04c14958d664fd9a0c3c4/href">https://medium.com/media/b7ddd73810d04c14958d664fd9a0c3c4/href</a></iframe><h3>Taking my DIY hydroponic system to the next level</h3><p>This project began with a simple question: Can I grow my herbs and vegetables indoors with the precision of a professional botanist?</p><p>By combining a DIY hydroponic setup with a smart monitoring system using Viam, I was able to create an environment where I could control and optimize every aspect of plant growth, from the nutrient levels in the water to environmental conditions.</p><p>Using sensors, a webcam, and a Raspberry Pi 5, I’ve built a system to help monitor key variables like temperature, humidity, and the visual rate of growth remotely to ensure ideal growing conditions. This approach has not only demystified the process of hydroponic gardening but also made it more accessible and manageable for a small indoor setting.</p><p>There’s still work to be done, many variables to test and refine, and more data to collect as the plants grow. The progress so far proves that with the right tools and the power of open-source software, it’s entirely possible to design a smart hydroponic system.
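</p><p>To give a concrete feel for the monitoring side, here is a minimal sketch using Viam’s Python SDK that reads the environmental sensor and grabs a frame from the webcam. Treat it as an illustration rather than the project’s actual code: the component names assume this build’s hardware, and the machine address and API key are placeholders you would pull from your own Viam configuration:</p><pre>import asyncio<br><br>from viam.robot.client import RobotClient<br>from viam.components.camera import Camera<br>from viam.components.sensor import Sensor<br><br>async def main():<br>    # Connect to the machine (address and API key come from the Viam app)<br>    opts = RobotClient.Options.with_api_key(api_key=&quot;&lt;API-KEY&gt;&quot;, api_key_id=&quot;&lt;API-KEY-ID&gt;&quot;)<br>    machine = await RobotClient.at_address(&quot;&lt;MACHINE-ADDRESS&gt;&quot;, opts)<br><br>    # Read temperature, pressure, humidity, and gas from the BME680<br>    sensor = Sensor.from_robot(machine, &quot;bme680&quot;)  # assumed component name<br>    print(await sensor.get_readings())<br><br>    # Grab a frame for the growth timelapse<br>    camera = Camera.from_robot(machine, &quot;webcam&quot;)  # assumed component name<br>    frame = await camera.get_image()<br><br>    await machine.close()<br><br>asyncio.run(main())</pre><p>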
The next steps in this project include:</p><ul><li>Training custom ML models on the plants’ growth to use that data for future crops</li><li>Building a module for a pH sensor to track nutrient changes in the hydroponic setup</li><li>Building a custom monitoring app leveraging Viam’s APIs to have a dashboard of information available about the garden</li></ul><h3>Build your own hydroponic gardening system</h3><p>If you’re considering starting your own smart hydroponic garden, don’t let the perceived complexity hold you back: start small, learn as you go, build modularly, and create a system with Viam that fits your plant-growing needs.</p><p>Stay tuned for my step-by-step tutorial to help you get started. In the meantime, check out our <a href="http://codelabs.viam.com">Codelabs</a> for more tutorials, and if you have any questions, join the conversation in our <a href="http://discord.viam.gg">Discord community</a> or connect with me on <a href="https://www.tiktok.com/@ariellemadeit">TikTok</a>!</p><p><em>Originally published at </em><a href="https://www.viam.com/post/diy-hydroponic-gardening-system-build"><em>https://www.viam.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=97023e70d7a6" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Deploying Hugging Face models on any robot in the real world]]></title>
            <link>https://blog.devgenius.io/deploying-hugging-face-models-with-viam-use-models-on-any-robot-in-the-real-world-67ea17b1b20b?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/67ea17b1b20b</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[computer-vision]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[hugging-face]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Thu, 08 Aug 2024 21:03:03 GMT</pubDate>
            <atom:updated>2026-01-15T17:55:02.084Z</atom:updated>
            <content:encoded><![CDATA[<p>Hugging Face is a vibrant hub for the machine learning community, offering an extensive collection of open-source computer vision and large language models. This ecosystem enables developers to contribute, find, and utilize a diverse range of models and datasets, making machine learning accessible to all. <a href="https://www.viam.com/">Viam</a> provides an open-source platform for configuring, controlling, and deploying custom code on robots, IoT devices, and smart machines out in the world. With Viam’s <a href="https://www.viam.com/product/registry">Registry</a>, the Viam developer community can share custom modular components and services that can be used on physical machines, much like the model and data-sharing ecosystem at Hugging Face.</p><p>Computer vision enables robots to perceive and interact with their surroundings. Viam’s <a href="https://www.viam.com/use-cases/computer-vision">Vision Service</a> supports advanced capabilities such as real-time object detection, classification, and 3D segmentation, allowing robots to understand and dynamically respond to the world. Additionally, Viam offers custom model training using images collected with Viam, allowing developers to tailor solutions to specific applications. But with the vast array of datasets and models available with Hugging Face, how can developers leverage these resources to enhance robotic capabilities even further?</p><p>To harness the power of Hugging Face models on Viam machines, the Viam community has contributed custom Vision Service modules that integrate YOLOv5 and YOLOv8 inference libraries to the registry for real-time detections. Configuration and testing of these models on hardware requires no code up front. The <a href="https://app.viam.com/module/viam-labs/YOLOv5">YOLOv5</a> module offers ease of use, making this model a popular choice for developers looking to quickly deploy computer vision models. The <a href="https://app.viam.com/module/viam-labs/YOLOv8">YOLOv8</a> module, on the other hand, is designed for speed and accuracy, making it ideal for applications that require real-time object detection.</p><p>The choice between the two depends on the specific requirements of the application, whether it’s ease of deployment or the need for high-performance models in dynamic environments. There are other modules on the Viam registry that use other Hugging Face models, like LLMs and beyond, making it easy for developers to bring AI models from Hugging Face to their machines in the real world.</p><h3><strong>Deploying a Hugging Face model on a Viam machine</strong></h3><p>To use this module, first <a href="https://docs.viam.com/cloud/machines/">create a machine instance</a> on the <a href="https://app.viam.com/">Viam app</a>. For this guide, the only hardware necessary will be a computer to run <strong>viam-server</strong> and a webcam to show detections or classifications from a Hugging Face model of choice.</p><p>Set up the machine according to the instructions in the <strong>Set up your machine part </strong>guide in the Viam app. Once the machine is connected and live, head to the <strong>Configure</strong> tab to begin configuring a webcam and the vision service that will run the Hugging Face model.</p><p>In the <strong>Builder</strong> panel, add a component using the ‘<strong>+</strong>’ icon. Select ‘<strong>Component</strong>’.
Search for the camera model ‘<strong>webcam</strong>’ and add it to the machine configuration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*N6vjgzbwa4nVMqli" /></figure><p>Next, add a vision service that will leverage the configured camera to run detections. Using the ‘<strong>+</strong>’ icon, select ‘<strong>Service</strong>’ and search for the ‘<strong>vision / yolov8</strong>’ model to add to the machine configuration. Either YOLO module will work for this tutorial.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*8Rm5WtG4KPQbu3Y9" /></figure><p>In order to leverage a Hugging Face model with a Viam YOLO module, find a model compatible with the selected model inference library. Browse through the Hugging Face model database and select a compatible model. In the <a href="https://github.com/viam-labs/YOLOv8">GitHub example</a>, a model for hard hat detection is used, which can be used on a security system designed for construction sites to ensure safety measures are followed.</p><p>Being a sneaker lover and having a closet that is overflowing, I want to use a shoe classification model to figure out what percentage of shoe brands I have in my sneaker collection so I can downsize (or have an excuse to buy more). I’m selecting <a href="https://huggingface.co/keremberke/yolov8m-shoe-classification">YOLOv8 Shoe Classification Model</a> from the Hugging Face models library, uploaded by user <a href="https://huggingface.co/keremberke">@keremberke</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wIHVbWamWtZZHHGd" /></figure><p>After the module is added to the machine, configure the custom attributes for the Vision Service. The following attributes are available for the model:</p><p><strong>Name: </strong>model_location</p><p><strong>Type: </strong>string</p><p><strong>Inclusion: </strong>Required</p><p><strong>Description: </strong>Local path or HuggingFace model identifier</p><p>Because I am using a model hosted on Hugging Face, all I need to add is the path after the URL slug.</p><pre>{<br>&quot;model_location&quot;: &quot;keremberke/yolov8n-shoe-classification&quot;<br>}</pre><p>If using a locally downloaded model, the configuration syntax is as follows:</p><pre>{<br>&quot;model_location&quot;: &quot;/path/to/yolov8n.pt&quot;<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*V6B0HCkYXSshUSvj" /></figure><p>The final configuration step is to configure a <a href="https://docs.viam.com/components/camera/transform/">transform camera</a> component, which is a pipeline for applying transformations to an input image source. Add a new <strong>Component</strong>, search for <strong>‘camera / transform</strong>’, and configure the pipeline as shown. For this transform camera, the transformations will be layered over the webcam feed to show classifications in real-time. Specify the classifier name, which will be the previously configured vision service named ‘<strong>yolov8</strong>’. Add the <strong>‘webcam</strong>’ in the ‘<strong>Depends On</strong>’ attribute field. If a detector is chosen, follow the instructions to set up <a href="https://docs.viam.com/components/camera/transform/">detections</a> for a transform camera.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zeBTWVt3oHtLUSme" /></figure><p>Once the pipeline is set up, it is time to test the Hugging Face model on a webcam, showing real-time classifications.
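</p><p>If you would rather consume the results in code than watch them through the transform camera, the same vision service can be queried directly with Viam’s Python SDK. This is a rough sketch, not official sample code: the <strong>yolov8</strong> service name matches the configuration above, while the camera component name and the connected RobotClient are assumptions to adapt to your own machine:</p><pre>from viam.robot.client import RobotClient<br>from viam.services.vision import VisionClient<br><br>async def classify(machine: RobotClient):<br>    # The vision service configured earlier, backed by the Hugging Face model<br>    classifier = VisionClient.from_robot(machine, &quot;yolov8&quot;)<br><br>    # Ask for the top classification from the configured webcam component<br>    classifications = await classifier.get_classifications_from_camera(&quot;camera&quot;, 1)<br>    for c in classifications:<br>        print(c.class_name, c.confidence)</pre><p>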
This setup works whether you are testing on a computer or have configured a machine that exists out in the world, allowing AI capabilities to enhance the performance of a machine in many applications.</p><h3><strong>Next steps</strong></h3><p>By leveraging the YOLO modules in Viam’s Registry, developers can use state-of-the-art object detection algorithms modularly with minimal upfront coding. This flexibility allows for rapid prototyping and deployment, catering to a wide range of applications, from security systems to automation in a warehouse to cleaning up your closet for your next home improvement project. After testing Hugging Face models on a Viam machine, the next step is to write custom code using Viam’s unified <a href="https://docs.viam.com/services/vision/#api">APIs</a> and flexible <a href="https://docs.viam.com/sdks/">SDKs</a> offered in different languages. Explore the <a href="https://app.viam.com/registry">Viam Registry</a> and contribute to the open-source ecosystem by adding more features, models, and integrations for machine learning applications. Whether you’re organizing your closet or creating a new home security system, computer vision has countless potential applications. Go <a href="https://app.viam.com/robots">start building</a> your next prototype with Viam today.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=67ea17b1b20b" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/deploying-hugging-face-models-with-viam-use-models-on-any-robot-in-the-real-world-67ea17b1b20b">Deploying Hugging Face models on any robot in the real world</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using Tableau to visualize sensor data in the real world]]></title>
            <link>https://blog.devgenius.io/harnessing-the-power-of-tableau-to-visualize-sensor-data-ef39ea66fd3a?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/ef39ea66fd3a</guid>
            <category><![CDATA[data]]></category>
            <category><![CDATA[tableau]]></category>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[database]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Mon, 03 Jun 2024 17:51:35 GMT</pubDate>
            <atom:updated>2024-12-31T19:21:36.641Z</atom:updated>
            <content:encoded><![CDATA[<p>Data scientists often face the challenge of making large amounts of complex data both comprehensible and actionable. With Viam, you can collect and capture sensor data that might be difficult to understand without the right visualization tools.</p><p>Enter <a href="https://www.tableau.com/trial/visualize-your-data">Tableau</a>, a popular and powerful tool that’s a game-changer for data visualization. By leveraging Tableau, you can transform heaps of raw sensor data from Viam’s data management service into insightful visualizations. This not only makes the data easier to interpret but also enhances inference, decision-making, trend predictions, and more.</p><h3>Connecting Viam sensor data to Tableau</h3><p>The first step in making beautiful charts and beyond with sensor data through Viam is to collect enough data to visualize. You can learn how to do this in our <a href="https://docs.viam.com/data/capture/">Data Capture docs</a>. Make sure to enable cloud sync so you can access all of your data from the data management service where the sensor data is synced and stored.</p><p>The next step is to <a href="https://docs.viam.com/data/query/#configure-data-query">configure database</a> access for your Organization by using the Viam CLI to set up a new read-only database user for the Viam organization’s data. Using this direct database connection in Tableau enables seamless data importing for your visualization.</p><p>You will need to use <a href="https://www.tableau.com/products/desktop/download">Tableau Desktop</a>, instead of the cloud version, to configure the direct database connectivity with the MongoDB Tableau Connector.</p><p>First, download the <a href="https://www.mongodb.com/try/download/jdbc-driver">MongoDB JDBC Driver</a> and move the downloaded .jar file into the <a href="https://www.mongodb.com/docs/atlas/data-federation/query/sql/tableau/connect/#download-the-mongodb-jdbc-driver.">appropriate directory</a> for your operating system. Then, download the <a href="https://www.mongodb.com/try/download/tableau-connector">MongoDB Tableau Connector</a> and move the downloaded .taco file into the ‘Connectors’ folder in the ‘My Tableau Repository’ folder in your documents. This allows the creation of a custom connector that enables Tableau to communicate with the MongoDB Atlas product.</p><p>Create a new database connection in the desktop app and search for ‘MongoDB Atlas by MongoDB’ under ‘Installed Connectors’. Authenticate with the connector using the credentials you set for your Organization with the Viam CLI.</p><ul><li><strong>MongoDB URI</strong>: ‘mongodb://&lt;YOUR DATA FEDERATION HOSTNAME STRING&gt;’</li><li><strong>Database: </strong>‘sensorData’</li><li><strong>Username: </strong>‘db-user-&lt;YOUR ORG ID&gt;’</li><li><strong>Password</strong>: Whatever password you set when configuring the database.</li></ul><p>You can then choose to import all of the data from your organization, or you can filter down to specific parameters such as a particular machine or sensor component if you want to narrow down the data included in Tableau.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*t1as6FQFTcPwW8JF.png" /></figure><p>When you have all your desired data loaded into Tableau with a live data source, you can begin organizing your data by splitting sensor readings into different table rows, renaming your readings, and organizing them into tables however you want.
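</p><p>If Tableau ever refuses to connect, it can help to verify the same credentials outside of Tableau first. Here is a small sanity-check sketch in Python using pymongo; the sensorData database comes from the steps above, while the readings collection name and the exact URI shape are assumptions to adapt to your own data federation setup:</p><pre>from pymongo import MongoClient  # pip install pymongo<br><br># Same hostname, user, and password configured for the Tableau connector<br>client = MongoClient(<br>    &quot;mongodb://&lt;YOUR DATA FEDERATION HOSTNAME STRING&gt;&quot;,<br>    username=&quot;db-user-&lt;YOUR ORG ID&gt;&quot;,<br>    password=&quot;&lt;YOUR PASSWORD&gt;&quot;,<br>)<br><br># Print a few raw sensor documents to confirm the connection works<br>for doc in client[&quot;sensorData&quot;][&quot;readings&quot;].find().limit(3):<br>    print(doc)</pre><p>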
Once you have organized your data sources into different sub-tables, it is as simple as dragging and dropping the desired sub-tables into different visualization templates.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*APdZoM9ztDGFzGbW.gif" /></figure><p>Once the data is imported into Tableau, the next step is to design visualizations that convey meaningful insights. You can consider different visualizations for different types of data interpretation:</p><p><strong>Time-Series Analysis:</strong> Line charts are the go-to for spotting trends and patterns over time. This is particularly useful for monitoring sensor data such as temperature, humidity, or motion over specific periods.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lBcj3PqFlWWiQE5o.png" /></figure><p><strong>Interactive Dashboards</strong>: Build interactive dashboards that let users dig into specific data points, apply filters, and see real-time updates. Dashboards can make your data more engaging and help you dive deeper into the analysis at a glance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7_7-W65BJctixe0B.png" /></figure><p><strong>Heatmaps and Scatter Plots:</strong> Want to spot correlations and anomalies? Heatmaps and scatter plots can be the answer. A heatmap can show you the intensity of sensor readings across different areas, while a scatter plot might uncover relationships between various sensor data types.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/970/0*RNFy1aaUUQ8g8Pe6.png" /></figure><h3>The benefits of using Tableau for sensor data</h3><p>By visualizing Viam data in Tableau, data scientists can turn a data dump into valuable insights that drive smarter decision-making and more. Tableau’s extensive customization options let you tailor visualizations to specific needs and preferences, ensuring the most relevant insights are front and center for analysis.</p><p>Additionally, thanks to Tableau’s live data connections, you can keep an eye on sensor data in real-time, allowing for quick responses to critical changes as your machine is collecting data in the real world. Get started with visualizing important machine data with <a href="https://viam.com/">Viam</a> today.</p><p><em>Originally published at </em><a href="https://www.viam.com/post/harnessing-the-power-of-tableau-to-visualize-sensor-data"><em>https://www.viam.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ef39ea66fd3a" width="1" height="1" alt=""><hr><p><a href="https://blog.devgenius.io/harnessing-the-power-of-tableau-to-visualize-sensor-data-ef39ea66fd3a">Using Tableau to visualize sensor data in the real world</a> was originally published in <a href="https://blog.devgenius.io">Dev Genius</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build a smart pet feeder with machine learning and Python]]></title>
            <link>https://ariellemadeit.medium.com/build-a-smart-pet-feeder-with-machine-learning-eee486dcee50?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/eee486dcee50</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[raspberry-pi]]></category>
            <category><![CDATA[computer-vision]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Fri, 01 Mar 2024 00:15:28 GMT</pubDate>
            <atom:updated>2024-12-31T19:22:20.587Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/518/0*Am3ciAo6p-HEVC2e.png" /></figure><p>If your dog is as insatiable as mine, you are familiar with having to wake up every morning to the sound of gentle whining at the door and the pitter patter of begging paws on the floor — two hours before the alarm. The sun has barely risen on the horizon as you glance out your east facing window, and you can see a moist nose peering under your door frame. Your dog commands you: it’s time to eat.</p><p>To allow myself to get some extra zzz’s before work, I’ve built a robot to feed my dog in the morning. It has also come in useful to give him some treats for being a Good Boy™ while I’m spending my day at the office.</p><p>In this tutorial you can follow along and build your own pet feeder. You will use the Viam app’s <a href="https://docs.viam.com/data/">Data Manager</a> to train a custom machine learning model that recognizes your pet and use the <a href="https://docs.viam.com/ml/">Machine Learning Service</a> and the <a href="https://docs.viam.com/ml/vision/">vision service</a> to use it on your robot. The final component is a stepper motor and a 3D printed model which holds and dispenses treats when your pet is recognized.</p><h3>Hardware</h3><p>You will need the following hardware components:</p><ol><li>A computer running macOS or Linux</li><li><a href="https://www.raspberrypi.com/products/raspberry-pi-4-model-b/">Raspberry Pi</a> with <a href="https://www.amazon.com/Amazon-Basics-microSDXC-Memory-Adapter/dp/B08TJTB8XS/ref=sr_1_4">microSD card</a> (and <a href="https://www.amazon.com/Card-Reader-Beikell-Memory-Adapter/dp/B09Z6JCKL7/ref=sr_1_3">microSD card reader</a>), with viam-server installed following the <a href="https://docs.viam.com/get-started/installation/">Installation Guide</a>.</li><li><a href="https://www.amazon.com/Raspberry-Model-Official-SC0218-Accessory/dp/B07W8XHMJZ/ref=asc_df_B07W8XHMJZ/">Raspberry Pi power supply</a></li><li><a href="https://makersportal.com/shop/nema-17-stepper-motor-kit-17hs4023-drv8825-bridge">Stepper motor and motor driver</a></li><li><a href="https://www.amazon.com/ABLEGRID-12-Volt-Power-Supply/dp/B009ZZKUPG/ref=asc_df_B009ZZKUPG/">12V power supply adaptor for motor driver</a></li><li><a href="https://www.amazon.com/wansview-Microphone-Streaming-Conference-Teaching/dp/B08XQ3TWFX/ref=sr_1_18_sspa">Simple USB powered webcam</a></li><li>Assorted jumper wires</li><li><a href="https://www.amazon.com/Cicidorai-M3-0-5-Button-Machine-Quantity/dp/B09TKP6C6B/ref=sr_1_9">Four 16mm or 20mm M3 screws</a></li></ol><h3>Tools and other materials</h3><p>You will also need the following tools and materials:</p><ol><li>Wide mouth Mason Jar or <a href="https://www.amazon.com/Ninja-Single-16-Ounce-Professional-Blender/dp/B07Q23X5WP/ref=sr_1_17">blender cup</a> (if you want to avoid using glass!)</li><li>Small pet treats or dry kibble</li><li>Tools for assembly such as screwdrivers and allen keys</li><li>3D printer (or somewhere you can order 3D printed parts from)</li><li><a href="https://github.com/viam-labs/smart-pet-feeder">3D printed STL models</a>, wiring, and configuration recommendations.</li></ol><h3>Software</h3><p>You will need the following software:</p><ul><li><a href="https://www.python.org/download/releases/3.0/">Python 3</a></li><li><a href="https://pip.pypa.io/en/stable/#">pip</a></li><li><a href="https://docs.viam.com/get-started/installation/#install-viam-server">viam-server</a> installed to your board. 
If you haven’t done this, we’ll walk you through it in the next section.</li></ul><h3>Assemble your robot</h3><p>The STL files for the smart feeder robot are available on <a href="https://github.com/viam-labs/smart-pet-feeder">GitHub</a>.</p><ol><li>Mount your Raspberry Pi to the side of the main body of your pet feeder using the provided mounting screw holes.</li><li>Connect your power source to the Pi through the side hole.</li><li>Mount your webcam to the front of your pet feeder. Connect the USB cable to your Pi.</li><li>Insert the 3D printed stepper motor wheel into your pet feeder. This is what will funnel treats out of your pet feeder programmatically.</li><li>Place your stepper motor into the motor holder part and gently slide the wires through the hole that leads through the body of your feeder and feeds the cables out on the Raspberry Pi side.</li><li>Slide the motor driver holder into the body of your feeder; it should sit flush and fit nicely.</li><li>Connect your stepper motor to the motor driver according to this wiring diagram:</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Qzfh_4GsKhD8OpG3.png" /></figure><h3>Configure and test your robot</h3><ol><li>If you haven’t already, set up the Raspberry Pi by following our <a href="https://docs.viam.com/get-started/installation/prepare/rpi-setup/">Raspberry Pi Setup Guide</a>.</li><li>Go to <a href="https://app.viam.com">the Viam app</a> and create a new machine instance in your preferred organization.</li><li>Then follow the instructions on the <strong>Setup</strong> tab.</li></ol><p>Now that you’ve set up your robot, you can start configuring and testing it.</p><h3>Configure your board</h3><p>Head to the <strong>Config</strong> tab on your machine’s page. Click on the <strong>Components</strong> subtab and click the <strong>Create component</strong> button in the lower-left corner.</p><p>Select board as the type and pi as the model. Name the component pi, then click <strong>Create</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*o7ZqUyIYil1LZcMC.png" /></figure><h3>Configure your <a href="https://docs.viam.com/components/camera/webcam/">webcam</a></h3><p>Click <strong>Create component</strong> and add your webcam with type camera and model webcam. Name the component petcam, then click <strong>Create</strong>.</p><p>Click on the <strong>video path</strong>. If the robot is connected, a dropdown menu with available cameras will appear.
Select your camera.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tH5faOzfSw5eDx_4.png" /></figure><blockquote>TIP</blockquote><blockquote>If you are unsure which camera to select, select one, save the configuration, and go to the <a href="https://docs.viam.com/components/camera/webcam/#view-the-camera-stream"><strong>Control</strong> tab</a> to confirm you can see the expected video stream.</blockquote><h3>Configure your <a href="https://docs.viam.com/components/motor/gpiostepper/">stepper motor</a></h3><p>Finally, click <strong>Create component</strong> and add another component with type motor and model gpiostepper.</p><ol><li>If you used the same pins as in the wiring diagram, set the <strong>direction</strong> to pin 15 GPIO 22, and the <strong>step</strong> logic to pin 16 GPIO 23.</li><li>Set the <strong>Enable pins</strong> toggle to low, then set the resulting <strong>Enabled Low</strong> dropdown to pin 18 GPIO 24.</li><li>Set the <strong>ticks per rotation</strong> to 400 and select your board model, pi.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*g79m_DkvBIfxrMKo.png" /></figure><p>Click <strong>Save config</strong> in the bottom left corner of the screen.</p><p>To test that everything is wired and configured correctly, head to the <a href="https://docs.viam.com/fleet/machines/#control">Control tab</a>. Start by testing the motor. Click on the motor panel and set the <strong>RPM</strong> to 20 and <strong># of Revolutions</strong> to 100 to see your treat dispensing mechanism in action. Feel free to tweak these values to achieve the desired speed of your dispenser.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UMwrtiqObss_gD1h.png" /></figure><p>Next, test your camera. Click on the camera panel and toggle the camera on. Now check if you can see your pet! Your pet may be a little skeptical of your robot at first, but once you get some treats in there, your furry friend will love it in no time!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/930/0*DvbXGsE9bUl7KYCT.png" /></figure><h3>Use machine learning to recognize your pet</h3><p>Let’s make our pet feeder smart with some data capture and machine learning models! To do that, you’ll first have to configure <a href="https://docs.viam.com/data/">Data Management</a> to capture images. Then you can use these images to train a machine learning model on your pet.</p><h3>Configure data management</h3><p>To enable the <a href="https://docs.viam.com/data/capture/">data capture</a> on your robot, do the following:</p><ol><li>Under the <strong>Config</strong> tab, select <strong>Services</strong>, and navigate to <strong>Create service</strong>. Here, you will add a service so your robot can sync data to the Viam app in the cloud.</li><li>For <strong>type</strong>, select <strong>Data Management</strong> from the dropdown, and give your service a name. We used pet-data for this tutorial.</li><li>Ensure that <strong>Data Capture</strong> is enabled and <strong>Cloud Sync</strong> is enabled. Enabling data capture here will allow you to view the saved images in the Viam app and allow you to easily tag them and train your own machine learning model. You can leave the default directory as is. This is where your captured data is stored on-robot.
<p>Click <strong>Save config</strong> in the bottom left corner of the screen.</p><p>To test that everything is wired and configured correctly, head to the <a href="https://docs.viam.com/fleet/machines/#control">Control tab</a>. Start by testing the motor. Click on the motor panel and set the <strong>RPM</strong> to 20 and <strong># of Revolutions</strong> to 100 to see your treat dispensing mechanism in action. Feel free to tweak these values to achieve the desired speed of your dispenser.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UMwrtiqObss_gD1h.png" /></figure>
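<p>You can run the same motor check from code once the Python SDK is installed (covered below). Here is a minimal sketch, not from the original tutorial, assuming the environment-variable credentials used in the full code at the end of this post and the motor name stepper that the code later uses:</p><pre>import asyncio<br>import os<br>from viam.robot.client import RobotClient<br>from viam.components.motor import Motor<br><br>async def connect():<br>    opts = RobotClient.Options.with_api_key(<br>        api_key=os.getenv(&#39;ROBOT_API_KEY&#39;, &#39;&#39;),<br>        api_key_id=os.getenv(&#39;ROBOT_API_KEY_ID&#39;, &#39;&#39;)<br>    )<br>    return await RobotClient.at_address(os.getenv(&#39;ROBOT_ADDRESS&#39;, &#39;&#39;), opts)<br><br>async def test_dispenser():<br>    robot = await connect()<br>    stepper = Motor.from_robot(robot, &quot;stepper&quot;)<br>    # mirror the Control tab test: 20 RPM for a couple of revolutions<br>    await stepper.go_for(rpm=20, revolutions=2)<br>    await robot.close()<br><br>if __name__ == &#39;__main__&#39;:<br>    asyncio.run(test_dispenser())</pre>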
<p>Next, test your camera. Click on the camera panel and toggle the camera on. Now check if you can see your pet! Your pet may be a little skeptical of your robot at first, but once you get some treats in there, your furry friend will love it in no time!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/930/0*DvbXGsE9bUl7KYCT.png" /></figure><h3>Use machine learning to recognize your pet</h3><p>Let’s make our pet feeder smart with some data capture and machine learning models! To do that, you’ll first have to configure <a href="https://docs.viam.com/data/">Data Management</a> to capture images. Then you can use these images to train a machine learning model on your pet.</p><h3>Configure data management</h3><p>To enable the <a href="https://docs.viam.com/data/capture/">data capture</a> on your robot, do the following:</p><ol><li>Under the <strong>Config</strong> tab, select <strong>Services</strong>, and navigate to <strong>Create service</strong>. Here, you will add a service so your robot can sync data to the Viam app in the cloud.</li><li>For <strong>type</strong>, select <strong>Data Management</strong> from the dropdown, and give your service a name. We used pet-data for this tutorial.</li><li>Ensure that <strong>Data Capture</strong> is enabled and <strong>Cloud Sync</strong> is enabled. Enabling data capture here will allow you to view the saved images in the Viam app and allow you to easily tag them and train your own machine learning model. You can leave the default directory as is; this is where your captured data is stored on-robot. By default, it saves to the ~/.viam/capture directory on your machine.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*yfOLhcmVUy6uFNAl.png" /></figure><p>Next, enable the Data Management service on the camera component on your robot:</p><ol><li>Go to the <strong>Components</strong> tab and scroll down to the camera component you previously configured.</li><li>Click <strong>+ Add method</strong> in the <strong>Data Capture Configuration</strong> section.</li><li>Set the <strong>Type</strong> to ReadImage and the <strong>Frequency</strong> to 0.333. This will capture an image from the camera roughly once every 3 seconds. Feel free to adjust the frequency if you want the camera to capture more or less image data. You want to capture data quickly so that you have as many pictures of your pet as possible so that your classifier model can be very accurate. You should also select the Mime Type that you want to capture. For this tutorial, we are capturing image/jpeg data.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*pOceOGux8UTLj9du.png" /></figure><h3>Capture images of your pet</h3><p>Now it’s time to start collecting images of your beloved pet. Set your feeder up near an area your pet likes to hang out, like your couch or their bed, or mount it temporarily over their food bowl, or even just hold it in front of them for a couple of minutes. You can check that data is being captured by heading over to the <a href="https://app.viam.com/data/view"><strong>DATA</strong> page</a> and filtering your image data to show just images from the location your pet feeder is in. Capture as many images as you want. If possible, capture your pet from different angles and with different backgrounds. Disable Data Capture after you’re done capturing images of your pet.</p><h3>Create a dataset and tag images</h3><p>Head over to the <a href="https://app.viam.com/data/view"><strong>DATA</strong> page</a> and select an image captured from your machine. After selecting the image, you can type a custom tag for some of the objects you see in the image and add it to a dataset. The first thing you want to consider is what tags you are trying to create and how you want your custom model to function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*rhgg_Z7k-ZuOw_Nv.png" /></figure><p>For the treat dispenser, you can tag images with the name of the pet, in our case toast. Notice that in our image collection, we captured images at different angles and with different background compositions. This is to ensure that our model can continue to recognize the object no matter how your robot is viewing it through its camera. To be able to train on the data you are tagging, you also need to add each image to a dataset.</p><p>Begin by selecting the image you would like to tag, and you will see all of the data that is associated with that image. Type in your desired tag in the Tags section.</p><p>Be mindful of your naming, as you can only use alphanumeric characters and underscores: this is because the model will be exported as a .tflite file with a corresponding .txt file for labeling.</p><p>Then use the Datasets dropdown to create a new dataset and assign the image to it. We called our dataset petfeeder.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*-tnP4RSRvirmaxHGvR7p9Q.gif" /></figure><p>For each image, add tags to indicate whether it contains your pet and add the image to your dataset.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/0*p5vKtGkGx2iq5iEJ.png" /></figure><p>Note that we are just tagging the whole image, as we are training an image classification model.</p><p>Continue working through your collected images, tagging them and assigning them to your dataset. Tag as many images as you need until you are happy with your dataset. This is important for the next step.</p><h3>View your dataset</h3><p>Once you have finished tagging your dataset, you can view the data in it by clicking on your dataset’s name on the image sidebar or on the <a href="https://app.viam.com/data/datasets"><strong>DATASETS</strong> subtab</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bDMcjoqAaaAAMryq.png" /></figure><h3>Train a model</h3><p>From the dataset view, click on <strong>Train model</strong>, name your model, and select <strong>Single label</strong> as the model type. Then select the label or labels that you used to label your pet images. We called the model petfeeder and selected the tags toast and no-toast to train on images of the pup and images that do not contain the pup.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*ZkxWDT_9735ZZkFQ6K8_iQ.gif" /></figure><p>If you want your model to be able to recognize multiple pets, you can instead create a <strong>Multi Label</strong> model based on multiple tags. Go ahead and select all the tags you would like to include in your model and click <strong>Train Model</strong>.</p><h3>Deploy your model to your robot</h3><p>Once the model has finished training, deploy it by adding an <a href="https://docs.viam.com/ml/">ML model service</a> to your robot:</p><ol><li>Navigate to the machine page on the Viam app.
Click the <strong>Config</strong> tab, then select the <strong>Services</strong> subtab.</li><li>Click <strong>Create service</strong> in the lower-left corner.</li><li>Select ML Model as the type, and select TFLite CPU as the model.</li><li>Enter puppyclassifier as the name, then click <strong>Create</strong>.</li><li>To configure your service and deploy a model onto your robot, select <strong>Deploy Model On Robot</strong> for the <strong>Deployment</strong> field.</li><li>Select your trained model (petfeeder) as your desired <strong>Model</strong>.</li></ol><h3>Use the vision service to detect your pet</h3><p>To detect your pet with your machine learning model, you need to add a <a href="https://docs.viam.com/ml/vision/">vision service</a> that uses the model and a <a href="https://docs.viam.com/components/camera/transform/">transform camera</a> that applies the vision service to an existing camera stream and specifies a confidence threshold:</p><ol><li>From the <strong>Services</strong> subtab, click <strong>Create service</strong> in the lower-left corner.</li><li>Select Vision as the type and ML Model as the model.</li><li>Enter puppyclassifier as the name for your vision service (the code later in this tutorial assumes this name), then click <strong>Create</strong>.</li><li>Select the model you previously created in the dropdown menu.</li><li>Navigate to the <strong>Components</strong> subtab and click <strong>Create component</strong> in the lower-left corner.</li><li>Create a <a href="https://docs.viam.com/components/camera/transform/">transform camera</a> by selecting type camera and model transform.</li><li>Enter classifier_cam as the name for your camera, then click <strong>Create</strong>.</li><li>Replace the JSON attributes with the following object, which specifies the camera source for the transform cam and also defines a pipeline that adds the classifier you created.</li></ol><pre>{<br>&quot;source&quot;: &quot;petcam&quot;,<br>&quot;pipeline&quot;: [<br>  {<br>      &quot;attributes&quot;: {<br>          &quot;classifier_name&quot;: &quot;puppyclassifier&quot;,<br>          &quot;confidence_threshold&quot;: 0.9<br>      },<br>      &quot;type&quot;: &quot;classifications&quot;<br>  }<br>]<br>}</pre><p>9. Head to your robot’s <strong>Control</strong> tab, click on your transform cam, and toggle it on. You should see your transform cam’s stream, and if it is pointed at your pet, the overlay should show that your pet is detected!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*iZUCfuPiMxbvrbWb.png" /></figure><h3>Control your robot programmatically</h3><p>With your robot configured, you can now add a program to your robot that controls the pet feeder when executed, using the <a href="https://docs.viam.com/build/program/apis/">Viam SDK</a> in the language of your choice. This tutorial uses Python.</p><h3>Set up your Python environment</h3><p>Open your terminal and SSH into your Pi. Run the following command to install the Python package manager onto your Pi:</p><pre>sudo apt install python3-pip</pre><p>Create a folder named <strong>petfeeder</strong> for your code and create a file called <strong>main.py</strong> inside.</p><p>The <a href="https://python.viam.dev/">Viam Python SDK</a> allows you to write Python programs to operate robots using Viam.
To install the Python SDK on your Raspberry Pi, run the following command in your existing SSH session:</p><pre>pip3 install --target=petfeeder viam-sdk python-vlc</pre><blockquote>Important</blockquote><blockquote>If you want your robot to automatically run your code upon startup, it is important to install the packages into the petfeeder folder because of how the Viam platform runs the process.</blockquote><h3>Add the connection code</h3><p>Go to your robot’s page on <a href="https://app.viam.com">the Viam app</a> and navigate to the <strong>Code sample</strong> tab. Select <strong>Python</strong>, then copy the generated sample code and paste it into the main.py file.</p><blockquote>API key and API key ID</blockquote><blockquote>By default, the sample code does not include your machine API key and API key ID. We strongly recommend that you add your API key and API key ID as environment variables and import these variables into your development environment as needed.</blockquote><blockquote>To show your machine’s API key and API key ID in the sample code, toggle <strong>Include secret</strong> on the <strong>Code sample</strong> tab. You can also see your API key and API key ID on your machine’s <strong>Security</strong> tab.</blockquote><h4>Caution</h4><p><strong>Do not share your API key or machine address publicly. Sharing this information could compromise your system security by allowing unauthorized access to your machine, or to the computer running your machine.</strong></p><p>Save the file and run this command to execute the code:</p><pre>python3 main.py</pre><p>When executed, this sample code connects to your robot as a client and prints the available resources.</p>
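<p>The generated code is unique to your machine, but as a rough sketch (with hypothetical placeholders; yours will differ), it has this shape:</p><pre>import asyncio<br>from viam.robot.client import RobotClient<br><br>async def connect():<br>    # hypothetical placeholders: use the values from your Code sample tab<br>    opts = RobotClient.Options.with_api_key(<br>        api_key=&#39;&lt;API-KEY&gt;&#39;,<br>        api_key_id=&#39;&lt;API-KEY-ID&gt;&#39;<br>    )<br>    return await RobotClient.at_address(&#39;&lt;MACHINE-ADDRESS&gt;&#39;, opts)<br><br>async def main():<br>    robot = await connect()<br>    # list every component and service the machine exposes<br>    print(&#39;Resources:&#39;)<br>    print(robot.resource_names)<br>    await robot.close()<br><br>if __name__ == &#39;__main__&#39;:<br>    asyncio.run(main())</pre>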
<h3>Add the logic</h3><p>If your program ran successfully and you saw a list of resources printed from the program, you can continue to add the robot logic.</p><p>You’ll be using the puppyclassifier. The following code initializes a camera and the puppyclassifier and shows you how to get classifications from the classifier by passing in the camera name as an argument:</p><pre>petcam = Camera.from_robot(robot, &quot;petcam&quot;)<br>puppyclassifier = VisionClient.from_robot(robot, &quot;puppyclassifier&quot;)<br>classifications = await puppyclassifier.get_classifications_from_camera(<br>    camera_name)</pre><p>Remove the existing code in the main function and replace it with the following logic. The code gets classifications from the puppyclassifier based on the camera stream and, if a pet is found, activates the stepper motor using the <a href="https://python.viam.dev/autoapi/viam/components/motor/index.html#viam.components.motor.Motor.go_for">go_for() method</a> to move a certain amount and dispense a treat.</p><pre>async def main():<br>    robot = await connect()<br>    # robot components + services below, update these based on how you named<br>    # them in configuration<br>    pi = Board.from_robot(robot, &quot;pi&quot;)<br>    petcam = Camera.from_robot(robot, &quot;petcam&quot;)<br>    stepper = Motor.from_robot(robot, &quot;stepper&quot;)<br>    puppyclassifier = VisionClient.from_robot(robot, &quot;puppyclassifier&quot;)<br>    try:<br>        while True:<br>            # check whether the camera is seeing the dog<br>            found = False<br>            classifications = await \<br>                puppyclassifier.get_classifications_from_camera(camera_name)<br>            for d in classifications:<br>                # check if the model is confident in the classification<br>                if d.confidence &gt; 0.7:<br>                    print(d)<br>                    if d.class_name.lower() == &quot;toastml&quot;:<br>                        print(&quot;This is Toast&quot;)<br>                        found = True<br>            if found:<br>                # turn on the stepper motor to dispense a treat<br>                print(&quot;giving snack&quot;)<br>                await stepper.go_for(rpm=80, revolutions=2)<br>                # pause so treats aren&#39;t dispensed constantly<br>                await asyncio.sleep(300)<br>            else:<br>                # make sure the stepper motor is stopped<br>                print(&quot;it&#39;s not the dog, no snacks&quot;)<br>                await stepper.stop()<br>            await asyncio.sleep(5)<br>    finally:<br>        # close the robot connection when the program exits<br>        await robot.close()<br><br>if __name__ == &#39;__main__&#39;:<br>    asyncio.run(main())</pre><p>Save your file and run the code, then put your pet in front of the robot to check that it works:</p><pre>python3 main.py</pre><h3>Run the program automatically</h3><p>One more thing. Right now, you need to run the code manually every time you want your robot to work. However, you can configure Viam to automatically run your code as a <a href="https://docs.viam.com/build/configure/processes/">process</a>.</p><p>Navigate to the <strong>Config</strong> tab of your machine’s page in <a href="https://app.viam.com">the Viam app</a>. Click on the <strong>Processes</strong> subtab and navigate to the <strong>Create process</strong> menu.</p><p>Enter main as the process name and click <strong>Create process</strong>.</p><p>In the new process panel, enter python3 as the executable. Click <strong>Add argument</strong> and enter main.py as the argument. Set the working directory to /home/pi/petfeeder, the folder on your Raspberry Pi that contains your code.</p>
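<p>For reference, the resulting raw process configuration might look roughly like the sketch below. This is an assumption about the process config schema rather than output copied from the app, so field names may differ; the <strong>Processes</strong> subtab remains the source of truth:</p><pre>{<br>  &quot;processes&quot;: [<br>    {<br>      &quot;id&quot;: &quot;main&quot;,<br>      &quot;name&quot;: &quot;python3&quot;,<br>      &quot;args&quot;: [&quot;main.py&quot;],<br>      &quot;cwd&quot;: &quot;/home/pi/petfeeder&quot;<br>    }<br>  ]<br>}</pre>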
<p>Click <strong>Save config</strong> in the bottom left corner of the screen.</p><p>Now your robot starts looking for your pet automatically once booted!</p><h3>Full code</h3><pre>import asyncio<br>import os<br>from viam.robot.client import RobotClient<br>from viam.components.board import Board<br>from viam.components.camera import Camera<br>from viam.components.motor import Motor<br>from viam.services.vision import VisionClient<br># these must be set, you can get them from your machine&#39;s &#39;Code sample&#39; tab<br>robot_api_key = os.getenv(&#39;ROBOT_API_KEY&#39;) or &#39;&#39;<br>robot_api_key_id = os.getenv(&#39;ROBOT_API_KEY_ID&#39;) or &#39;&#39;<br>robot_address = os.getenv(&#39;ROBOT_ADDRESS&#39;) or &#39;&#39;<br># change this if you named your camera differently in your robot configuration<br>camera_name = os.getenv(&#39;ROBOT_CAMERA&#39;) or &#39;petcam&#39;<br><br>async def connect():<br>    opts = RobotClient.Options.with_api_key(<br>      api_key=robot_api_key,<br>      api_key_id=robot_api_key_id<br>    )<br>    return await RobotClient.at_address(robot_address, opts)<br><br>async def main():<br>    robot = await connect()<br>    # robot components + services below, update these based on how you named<br>    # them in configuration<br>    pi = Board.from_robot(robot, &quot;pi&quot;)<br>    petcam = Camera.from_robot(robot, &quot;petcam&quot;)<br>    stepper = Motor.from_robot(robot, &quot;stepper&quot;)<br>    puppyclassifier = VisionClient.from_robot(robot, &quot;puppyclassifier&quot;)<br>    try:<br>        while True:<br>            # check whether the camera is seeing the dog<br>            found = False<br>            classifications = await \<br>                puppyclassifier.get_classifications_from_camera(camera_name)<br>            for d in classifications:<br>                # check if the model is confident in the classification<br>                if d.confidence &gt; 0.7:<br>                    print(d)<br>                    if d.class_name.lower() == &quot;toastml&quot;:<br>                        print(&quot;This is Toast&quot;)<br>                        found = True<br>            if found:<br>                # turn on the stepper motor to dispense a treat<br>                print(&quot;giving snack&quot;)<br>                await stepper.go_for(rpm=80, revolutions=2)<br>                # pause so treats aren&#39;t dispensed constantly<br>                await asyncio.sleep(300)<br>            else:<br>                # make sure the stepper motor is stopped<br>                print(&quot;it&#39;s not the dog, no snacks&quot;)<br>                await stepper.stop()<br>            await asyncio.sleep(5)<br>    finally:<br>        # close the robot connection when the program exits<br>        await robot.close()<br><br>if __name__ == &#39;__main__&#39;:<br>    asyncio.run(main())</pre>
<h3>Next steps</h3><p>Take your smart pet feeder to the next level! You could try one of the following:</p><ul><li>Add speakers and record your voice so that the pet feeder can play a message to your pet each time it dispenses a treat.</li><li>Train an <a href="https://docs.viam.com/ml/">ML model</a> to recognize when your pet performs a trick, and withhold the treat until a specific trick is detected.</li><li>Add a button that your pet must press to access the treat. If you add several treat types, you might include a different color button for each treat type, allowing your pet to choose.</li><li>If you have multiple pets, you could configure different treats for each pet by training the ML model on each pet, and dispensing different treats depending on the pet recognized.</li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=eee486dcee50" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Innovative smart home projects for a beginner in robotics 2024]]></title>
            <link>https://ariellemadeit.medium.com/innovative-smart-home-projects-for-a-beginner-in-robotics-2024-95963aa6d7a3?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/95963aa6d7a3</guid>
            <category><![CDATA[smart-home]]></category>
            <category><![CDATA[automation]]></category>
            <category><![CDATA[raspberry-pi]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[robotics]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Wed, 24 Jan 2024 19:14:25 GMT</pubDate>
            <atom:updated>2024-12-31T19:23:13.763Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*SZ8PbzRcNHhIymcI.png" /><figcaption>Training a custom ML model on pictures of my pup</figcaption></figure><p>I have always been a fan of automation and home improvements, but time constraints and a reluctance to invest in off-the-shelf solutions that may not meet my specific needs held me back from exploring this interest further. What if there was an app that lets you build a smart device yourself, faster than it takes for a store-bought one to be shipped to your door?</p><p>In this blog post, you’ll find a list of DIY smart home projects using Viam, a smart device and robotics software, perfect for beginner robot building. They’re budget-friendly and easy to start with, especially if you’re just getting into robotics or have a hectic schedule.</p><h3>Create your very own smart pet feeder using machine learning (<a href="https://docs.viam.com/tutorials/projects/pet-treat-dispenser/">link</a>)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*BGB8dnMrIf2-p-IC.png" /><figcaption>Testing the smart device’s components from the Control tab in Viam’s app.</figcaption></figure><p>Imagine getting a few extra moments of sleep before you start your workday. With this DIY robot, you’ll automate your pet’s morning meal and give them a few extra treats while you’re away, using computer vision and machine learning.</p><p><strong>Difficulty Level:</strong> Easy</p><p><strong>Steps:</strong></p><ul><li>Assemble all of your hardware components.</li><li>Configure and test your robot in the Viam app.</li><li>Put the pet feeder in front of your beloved pet and check your camera.</li><li>Start collecting pictures of your pet in the Data Manager.</li><li>Create a dataset and tag all the pictures of your pet.</li><li>Train a model on your pet dataset.</li><li>Deploy your custom machine-learning model onto your robot.</li><li>Control the pet feeder with some code.</li><li>Watch your pet eat treats whenever the pet feeder sees your pet!</li></ul><h3>Make a facial verification system for home security (<a href="https://docs.viam.com/tutorials/projects/verification-system/">link</a>)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*VMHU6lvwGN2W1z_c.png" /><figcaption>Image of the facial verification system in action.</figcaption></figure><p>Transform your home security with a DIY facial verification system. Ditch the limitations of off-the-shelf options and create a system that recognizes family and friends, allows them access, and even sends text alerts when someone rings the doorbell.</p><p>Utilize the machine learning capabilities of the Viam platform to build a smart security system with just a board and a camera. It will intelligently disarm the alarm if it identifies an approved face after detecting someone.
Simple, effective, and uniquely yours.</p><p><strong>Difficulty Level: </strong>Easy</p><p><strong>Steps:</strong></p><ul><li>Create a new machine in the Viam app and install viam-server on your new machine.</li><li>Configure your camera component and test it in the Control tab.</li><li>Set up your security camera in your desired location in your home.</li><li>Capture images of your family members and create a dataset for people you want to identify with your robot.</li><li>Train a model on your dataset.</li><li>Configure a facial detector using that trained model.</li><li>Configure a verification system and configure a transform camera.</li><li>Watch your verification system in action!</li></ul><h3>Make a drink-carrying robot for your house (<a href="https://docs.viam.com/tutorials/projects/tipsy/">link</a>)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/500/0*B36Z4ItLKnb1q0pR.png" /><figcaption>Image of the robot Tipsy delivering drinks to guests at a party.</figcaption></figure><p>Getting up from your couch for a beverage is a thing of the past. Having a personal robot assistant that transports beverages, snacks, or pretty much whatever you want between the rooms of your house is easier than you think.</p><p>With a rover base, camera, ultrasonic sensors, and batteries, you can have your very own smart drink-carrying robot in your home!</p><p><strong>Difficulty Level:</strong> Intermediate</p><p><strong>Steps:</strong></p><ul><li>Set up your robot’s board and base.</li><li>Configure the camera and ultrasonic sensor.</li><li>Configure an ML Model Service to detect people and objects.</li><li>Set up a detection camera.</li><li>Write some robot logic that detects obstacles and moves the base around to your desired person.</li><li>Enjoy your robot-delivered treats.</li></ul><p>The integration of robots and smart machines into our homes is more than futuristic fantasy; it’s a current reality changing our daily lives.</p><p>Imagine the convenience of a drink-carrying robot that quenches your thirst, the security of a custom-tailored home system, or the ease of a smart pet feeder caring for your pet in your absence. These projects are not just fun but also great starting points for learning how to build simple robots with Viam.</p><p>Dive into the types of smart machines you can create and explore building your own robot. What will your first project be? Join Viam’s <a href="http://discord.gg/viam">online community</a>, draw inspiration from others, and share your creations to be featured on our socials!</p><p><em>Originally published at </em><a href="https://www.viam.com/post/innovative-smart-home-projects-for-a-beginner-in-robotics-2024"><em>https://www.viam.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=95963aa6d7a3" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Open source in an hour: Creating a community controlled robot using the Discord API]]></title>
            <link>https://ariellemadeit.medium.com/open-source-in-an-hour-creating-a-community-controlled-robot-using-discord-and-viam-c9d444881417?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/c9d444881417</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[discord]]></category>
            <category><![CDATA[raspberry-pi]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Wed, 17 Jan 2024 16:12:22 GMT</pubDate>
            <atom:updated>2026-01-15T17:53:43.197Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*koXwmdXPbBGlOwKFfFhlQQ.png" /><figcaption>A rover being controlled by Discord + code, generated by DALL-E/ChatGPT</figcaption></figure><p>I spend most of my day now managing the developer community for <a href="https://www.viam.com/resources/community?utm_source=viam-discord&amp;utm_medium=social&amp;utm_campaign=arielle-medium">Viam Robotics on Discord</a>, and hanging out with robotics enthusiasts and software engineers has given me a new perspective on the work I do as a Developer Advocate. Typically, my role involves crafting tutorials and demos to highlight our product’s features. After spending time with our community, I have been reflecting on how I can create robot demos that engage not just people in a room or someone reading a blog post, but something a whole online community can play with.</p><p>So, here’s a thought that’s been buzzing in my mind: How easy would it be to create a Robot Bot? Discord Bots are very common and pretty easy to make yourself thanks to awesome open source documentation in the <a href="https://discord.com/developers/docs/intro">Discord Developer Portal</a>. <a href="http://app.viam.com">Viam is an open source framework for building and programming robotics</a> with flexible SDKs, so it is easy to build other interfaces on top of them.</p><p>Here, I am going to show you how to control a physical robot running <strong>viam-server</strong> through simple DIY Discord chat commands, all in less than an hour.</p><h3>Set up a real physical robot.</h3><ol><li>Build a robot. Yes, like an actual physical robot. I built a Viam <a href="https://www.viam.com/resources/rover">rover</a> with a wheeled base for simple movements.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QmO2kssSPM5fxHRCLhWZ8A.png" /><figcaption>Image of an assembled Viam Rover</figcaption></figure><ol start="2"><li>Prepare a board for <strong>viam-server</strong> installation. <a href="https://docs.viam.com/get-started/installation/">You can check out how on the Viam Docs site</a>. I chose a Raspberry Pi for my robot. Create SSH credentials and log into your robot in your terminal.</li><li>Set up a new robot on <a href="https://app.viam.com/">app.viam.com</a>. <strong>If you don’t have an account, it is free to use and easy to sign up.</strong></li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Zams4j6cle--ZrTB0AA26A.png" /><figcaption>Screenshot of a robot instance on app.viam.com</figcaption></figure><ol start="4"><li>Configure a robot, and test it in the control tab. I used a Viam rover, so setting up a <a href="https://docs.viam.com/get-started/try-viam/rover-resources/rover-tutorial-fragments/">pre-configured Fragment</a> took just a few seconds. Head to the Control tab and make sure all your parts are working and you can move your base and see the camera.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*L8NJWfO8p9ntzwW5imDZ0A.gif" /><figcaption>A moving rover with a working camera being controlled in the Viam app</figcaption></figure><h3>Prepare your development environment.</h3><p>I used Python, and to manage Python packages efficiently I recommend creating a virtual environment.</p><ol><li>Create your project directory and make sure you are in the right folder. <strong>$ mkdir discordBot <br>$ cd discordBot</strong></li><li>Create your virtual environment and activate it.
<br><strong>$ sudo apt-get install python3-venv<br>$ python3 -m venv virtualenvironment<br>$ source virtualenvironment/bin/activate</strong></li><li>Install the necessary packages. I used <a href="https://pypi.org/project/viam-sdk/"><strong>viam-sdk</strong></a> and <a href="https://pypi.org/project/discord.py/"><strong>discord.py</strong></a>.<br><strong>$ pip install viam-sdk<br>$ pip install discord.py</strong></li></ol><h3>Prepare your Discord Developer Configurations</h3><ol><li>Go to discord.com/developers/applications and select New Application. Name it and head to the Bot tab. I named this bot <strong>RobotBot</strong> (I know, very creative).</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/646/1*rRDIpw6F3uLsQF_Nlco2iA.png" /><figcaption>Discord Bot settings in the Developer Portal</figcaption></figure><ol start="2"><li>Enable Privileged Gateway Intents permissions for MESSAGE CONTENT INTENT. This allows the bot to read the content of messages so it can respond to commands. Set desired bot permissions as well; I picked <strong>Send Messages</strong>.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FL42KU4HJfNqtxB4w9yS8Q.png" /></figure><ol start="3"><li>Head to the OAuth2 tab and select OAuth2 URL Generator.</li><li>Select <strong>Bot</strong>, and add specific bot permissions (<strong>Send Messages, Send TTS Messages, Manage Messages, Attach Files</strong>).</li><li>Copy the URL and open it in your browser.</li><li>Add it to an authorized server.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/760/1*wF8gpe8Qm1zfEAocOKF3-Q.png" /><figcaption>Adding your Discord Bot to an authorized server</figcaption></figure><h3>Testing the Discord Bot</h3><ol><li>Go back to the Bot tab in the Discord Developer Portal, reveal your token, and copy it.</li><li>Add it to the following script. This script is pretty boilerplate as a simple example of a Discord Chat Bot. I used <a href="https://www.freecodecamp.org/news/python-env-vars-how-to-get-an-environment-variable-in-python/">environment variables</a> to set the token.</li></ol><pre>import discord<br>import os<br>from discord.ext import commands<br><br>discord_token = os.getenv(&#39;DISCORD_TOKEN&#39;)<br><br>intents = discord.Intents.default()<br>intents.message_content = True<br>bot = commands.Bot(command_prefix=&#39;/&#39;, intents=intents)<br><br>@bot.command()<br>async def hello(ctx):<br>  await ctx.send(&#39;hello from robotbot&#39;)<br><br>bot.run(discord_token)</pre><p>Run your code, and your bot should go online in your Discord server.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/460/0*02m5wPKF5YVr7kvV" /><figcaption>Discord Bot going online in a server</figcaption></figure><p>Now say hello in the chat; your Discord bot will greet you back if the connections are all correct.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/550/0*DqyFrNBDRFZHlZyn" /><figcaption>Discord Bot responding</figcaption></figure><h3>Add your robot controls to the Discord Bot using Viam’s Python SDK</h3><p>Integrate Viam robot code into your Discord bot. Create a bot command and corresponding robot movement. I grabbed the boilerplate code from the <a href="https://docs.viam.com/fleet/machines/#code-sample">Code Sample tab</a> in the Viam App, and removed components I was not using for the sake of this example. I am only using the <a href="https://docs.viam.com/build/program/apis/#base">Base API</a>.</p><p>Here are some examples of methods in the Base API that make it simple to control my rover.
I am going to use the move_straight() method.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gxg6WlG7U0ol84fA-vwfPg.png" /><figcaption>Just a few of the methods in the Base API</figcaption></figure><p>I’ve used environment variables to set my API key and robot address, which are unique to each robot. Those can also be conveniently found in the Code Sample tab.</p><p>Create a forward() function (or really any movement you want) following the same logic. This logic can apply to any robot functionality you want to write using a Discord Bot to control a Viam robot. Here I am describing what is happening in <strong>async def forward(ctx):</strong></p><ol><li>Use the @bot.command decorator to start off your function. These decorators are used to register functions as bot commands, and they can’t be nested inside other functions. They should be used at the top level in your script.</li><li>Create a variable in the scope of that function for the component you want to control.</li><li>Write the command response for the bot to auto reply in Discord.</li><li>Write the function that moves the robot.</li></ol><pre>import discord<br>import os<br><br>from viam.robot.client import RobotClient<br>from viam.components.base import Base<br>from discord.ext import commands<br><br><br>api_key = os.getenv(&#39;API_KEY&#39;)<br>api_key_id = os.getenv(&#39;API_KEY_ID&#39;)<br>robot_address = os.getenv(&#39;ROBOT_ADDRESS&#39;)<br>discord_token = os.getenv(&#39;DISCORD_TOKEN&#39;)<br><br>intents = discord.Intents.default()<br>intents.message_content = True<br>bot = commands.Bot(command_prefix=&#39;/&#39;, intents=intents)<br><br>async def connect():<br>  opts = RobotClient.Options.with_api_key(<br>    api_key,<br>    api_key_id<br>  )<br>  return await RobotClient.at_address(robot_address, opts)<br><br>@bot.command()<br>async def forward(ctx):<br>  robot = await connect()<br>  viam_base = Base.from_robot(robot, &quot;viam_base&quot;)<br>  await ctx.send(&#39;i`m going forward&#39;)<br>  await viam_base.move_straight(distance=10, velocity=50)<br>  await viam_base.stop()<br>  # close the connection, each command opens a fresh one<br>  await robot.close()<br><br>@bot.command()<br>async def hello(ctx):<br>  await ctx.send(&#39;hello from robotbot&#39;)<br><br>bot.run(discord_token)</pre><p>Easy peasy. Run your code, and the bot will instantly go online. Type in your command in the chat.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/560/0*LulCOiNqJeFucgco" /></figure><p>Your robot should be moving!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*9gr6Sl2GnpKv89H1fMAXhg.gif" /></figure><p>You can apply this logic to control any robot functions using Viam and a Discord Bot. This simple project was completed in under an hour, demonstrating the power of open source and the true flexibility of Viam’s SDKs. For my next iteration of this project, I think it would be pretty fun and chaotic to let a server of thousands of people drive this rover around my house. Maybe I should create a <a href="https://docs.viam.com/mobility/motion/">SLAM map of my house and use Viam’s Motion Planning</a> to make this project a bit more robust.</p>
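<p>You can extend the bot with more commands following the same pattern. As a sketch (assuming the Base API’s spin() method, which turns the base in place by a given angle at a given angular velocity), a /spin command dropped into the script above could look like this:</p><pre>@bot.command()<br>async def spin(ctx):<br>  robot = await connect()<br>  viam_base = Base.from_robot(robot, &quot;viam_base&quot;)<br>  await ctx.send(&#39;spinning around!&#39;)<br>  # turn the base 360 degrees in place at 45 degrees per second<br>  await viam_base.spin(angle=360, velocity=45)<br>  await viam_base.stop()<br>  # close the connection so each command starts fresh<br>  await robot.close()</pre>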
<p>Let me know if you try this! You can find me and our amazing developer community on <a href="https://www.viam.com/resources/community?utm_source=viam-discord&amp;utm_medium=social&amp;utm_campaign=arielle-medium">Discord</a>. Tag your projects in #built-on-viam to be featured on our socials.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c9d444881417" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build Backstories: The autonomous car powered by mind control ]]></title>
            <link>https://ariellemadeit.medium.com/build-backstories-reimagining-spyder-the-autonomous-car-powered-by-your-mind-f5bd7d0ef1f9?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/f5bd7d0ef1f9</guid>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[hackathons]]></category>
            <category><![CDATA[software]]></category>
            <category><![CDATA[projects]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Mon, 13 Nov 2023 15:40:27 GMT</pubDate>
            <atom:updated>2026-01-15T17:50:43.243Z</atom:updated>
<content:encoded><![CDATA[<p>I had the incredible opportunity to participate in <a href="https://hackthenorth.com/">Hack the North</a>, a prestigious student-led hackathon where students can spend a whole weekend designing innovative software and hardware projects.<br>In this blog post, I’m shining a spotlight on one of the standout winners of the “Best Use of Viam” category, in which Team <a href="https://devpost.com/software/spyder">Spyder</a> achieved their ambitious goal to <strong>transform an old motorized Audi Spyder toy car into a mind-controlled marvel.</strong> They ingeniously integrated the Viam API with a Neurosity Crown, enabling them to trigger robot protocols using the power of their minds.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jMIBATs7QPcDY0zd.jpeg" /><figcaption><em>Spyder, the smart machine built in 32 hours, waiting to be controlled by Patrick’s mind.</em></figcaption></figure><p>To get a glimpse of how this was made, I interviewed the creators of this project. Keep reading for more!</p><h3>Tell us about yourself and your team.</h3><p>We are Team Spyder! <strong>We’re made up of four third-year engineering students at the University of Waterloo </strong>who are experienced in both engineering competitions and hackathons. We set out to compete in Hack the North 2023 for fun, to learn something new, and, this year, to win!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jTUZub5N-pNMfV96.jpeg" /><figcaption><em>Team members Patrick Kim (mechatronics engineering), Melda Kiziltan, Ari Wasch (computer engineering), and Aaditya Chaudhary with their build.</em></figcaption></figure><h3>Can you walk us through the initial concept of your build?</h3><p>The initial idea struck when leftover electronic parts and a toy car faced the trash at the student center. Why not turn this into something?</p><p>Inspired by Michael Reeves’ build, we fantasized about building a mind-controlled car, but aimed to make it even better.</p><p><em>Michael Reeves’ mind-controlled car: the inspiration behind Spyder.</em></p><p>We took the car apart to see if our idea was even feasible: could we modify this to match what we need? Would we have the right electronics to make this happen? <br>Turns out, when you have a bunch of overexcited engineering students working on a project like this one, it doesn’t matter. We’ll make our own electronics.</p><h3>So, what does your build do?</h3><p>We made several mechanical and electronic modifications to the car to meet our specifications, including:</p><ul><li>Designing and 3D printing a custom rack and pinion gear to adjust the steering system, coupled with an additional motor for automated steering.</li><li>Upgrading the battery system for enhanced performance and durability.</li><li>Incorporating advanced electronics such as a Raspberry Pi 4, buck converter, LED controller, custom power distribution board made by Ari, and motor driver.</li></ul><p>Next was the software component. <strong>We used Viam to control the car electronics </strong>once they were all plugged in and soldered. <br><strong>For the mind-control aspect, we harnessed the Neurosity Crown and its AI</strong>, training it with a dataset derived from Patrick’s brain waves.
This approach enabled the car to respond to Patrick’s thoughts, moving forward when he intended it to.</p><p><em>Patrick using the Neurosity Crown to capture his brain signals.</em></p><p>We integrated the two software elements using scripts crafted by Melda, utilizing web sockets for connectivity.</p><h3>Can you walk us through step-by-step on how you used Viam for your build?</h3><p>We initially turned to Viam to validate our concept. Aaditya explored <a href="https://docs.viam.com/">Viam’s documentation</a>, employing a motor driver compatible with the Raspberry Pi. Following the guidelines provided, we connected it and leveraged the <a href="https://docs.viam.com/internals/rdk/#viam-server">Viam server</a> for system troubleshooting!</p><p>This step was crucial as it allowed us to confirm the integrity of our electrical system before writing any code.<strong> The Viam server was instrumental for our troubleshooting and sanity checks, saving us several hours</strong>.</p><p>It also enabled us to test our motors and mechanical systems to their limits. After validating our systems, we found the code generated by Viam and <a href="https://docs.viam.com/program/apis/">its SDKs </a>straightforward to integrate into our <a href="https://github.com/MeldaKiziltan/Spyder">codebase</a>.</p><h3>What were the most challenging aspects of your build?</h3><p>One of the most challenging aspects was getting the right data from the Neurosity Crown. Since the electrodes sit on top of the scalp, it is difficult to capture clear brain waves.</p><p>Think of it like listening to a conversation happening in a room while you’re outside the closed door. It’s muffled, and sometimes you can’t quite get all the words.</p><p>Eventually, after gathering a few datasets, we got it to work, and Patrick’s thoughts were properly recognized!</p><h3>What was the learning curve like for Viam’s platform?</h3><p>Viam’s platform and SDKs were very easy to pick up and use in our design process and codebase. <strong>Viam’s interface was simple and easy to use</strong>, but still packed with useful features. <strong>The SDKs were very straightforward</strong> and a quick read through the documentation was all it took to figure out how to effectively use them.</p><h3>Any plans to upgrade your current hackathon build?</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VTHth5qR-j2077Zc.jpeg" /><figcaption><em>Ari and Aaditya from Team Spyder working on their build.</em></figcaption></figure><p>We’re considering integrating a webcam for <a href="https://docs.viam.com/services/vision/">Computer Vision</a>, enabling the car to automatically slow down upon detecting humans in its path.</p><p>We also aim to enable simultaneous recognition of multiple commands, enhance our electromechanical systems to support a passenger in the car, and even introduce mind-controlled unlocking of the vehicle!<br>We have a few other ideas to use a <a href="https://www.viam.com/resources/rover">Viam Rover</a> for next year’s Hack the North project, so stay tuned for that!</p><h3>Any final thoughts or words of wisdom you would like to share with aspiring builders and innovators?</h3><p><strong>Don’t be afraid to try something new!</strong> We initially started making this project as a joke. Who would ever think about creating a mind-controlled children’s car?</p><p>We had a lot of fun making this project and learning how to work with neuro-technology and Viam.
The win is definitely the cherry on top of the whole experience, but ultimately, we came out of it as better engineers and created fun memories to go with it.</p><p><strong>Things won’t always go to plan. </strong>We had quite a few hiccups and had to pivot a few times, especially at the beginning, but it’s important that you try your best and go with it. <strong>Sometimes when things don’t work out initially, they end up becoming better.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jbr3Z76NPYbxgJau.jpeg" /><figcaption><em>A personal note to Melda following the completion of their build.</em></figcaption></figure><p>Ready to create your own smart machine? Dive into some Viam <a href="https://docs.viam.com/tutorials/">tutorials</a> for immediate building, explore <a href="https://docs.viam.com/extend/">modular resources</a> to enhance any project with Viam, and join the online <a href="http://discord.gg/viam">community</a> for endless inspiration from innovative creations!</p><p><em>Originally published at </em><a href="https://www.viam.com/post/build-backstories-reimagining-spyder-the-autonomous-car-powered-by-your-mind"><em>https://www.viam.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f5bd7d0ef1f9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Highlights and builds from Hack the North 2023]]></title>
            <link>https://ariellemadeit.medium.com/from-concept-to-creation-highlights-and-builds-from-hack-the-north-5375d2b3db96?source=rss-4dd0706c51e6------2</link>
            <guid isPermaLink="false">https://medium.com/p/5375d2b3db96</guid>
            <category><![CDATA[hardware]]></category>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[engineering]]></category>
            <category><![CDATA[hackathons]]></category>
            <category><![CDATA[software]]></category>
            <dc:creator><![CDATA[Arielle Mella ]]></dc:creator>
            <pubDate>Thu, 09 Nov 2023 20:34:10 GMT</pubDate>
            <atom:updated>2024-12-31T19:25:22.090Z</atom:updated>
<content:encoded><![CDATA[<p>Just like geese heading north in the summer, we joined the flock and made our way to <a href="https://hackthenorth.com/">Hack the North</a> for a non-stop, 32-hour hackathon. Picture this: 1,000 students from everywhere, diving into events, workshops, games, and all sorts of cool stuff. It was what I’d deem “the unofficial music festival of hackathons.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tbFQ7LNI1AU-vzBK.jpeg" /></figure><p>The challenge for attendees was simple: Make something amazing. And did they deliver! We’ve put together the best bits for you to check out and get inspired in the process.</p><h3>Entering Hack the North</h3><p>As a sponsor of Hack the North, we hosted a category within the competition, Best Use of Viam, giving students the opportunity to win special prizes and see the versatility of our software. They could use the platform for hardware configuration, software engineering, or a combination of both for our category.</p><p>To introduce them to the software, our team gave an API workshop titled “How to Bring Your Robotic Projects to Life,” which taught hackathon participants how to leverage the Viam platform to make their machines smarter.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*v3atjzxNfR5NdFmw.jpeg" /><figcaption><em>The rest of the Viam team and I who helped students during the 32-hour hackathon.</em></figcaption></figure><p>Through this demo, we:</p><ul><li>Introduced “<a href="https://docs.viam.com/tutorials/projects/tipsy/">Tipsy</a>,” our mobile rover, trained in computer vision and machine learning for autonomous navigation.</li><li>Created a fun, Pac-Man-inspired demo with team members in unique shirts, triggering Tipsy’s chase or escape.</li><li>Demonstrated Viam’s <a href="https://www.viam.com/product/machine-learning">machine learning</a> essentials, teaching students to train their models within the app.</li><li>Showcased how to tailor Computer Vision detections through coding and configuration.</li><li>Conducted a live coding session exploring our SDKs and real-time robot configuration.</li><li>Highlighted the Viam app’s seamless functionality, impressing students with its user-friendly interface.</li></ul><p>The aftermath? Students and teams were inspired to use our platform to accelerate their projects, building smart machines faster within the hackathon’s short timeline.</p><p>With over 30 inquiries for <a href="https://www.viam.com/resources/rover">Viam Rover</a> development kits, six lucky groups were allowed to prototype using our rovers to help them realize their hackathon goals. See some of the highlights below.</p><h3>Seeing the projects come to life</h3><h4>Use the Force… or just one of our SDKs</h4><p>Imagine using your thoughts to control a car. Could that just be a Jedi mind trick? Or is it simply the magic of open source integrations?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WcgG1nYJCn5MRJyK.jpeg" /><figcaption><em>Team Spyder demoing their project to the full Hack the North audience.</em></figcaption></figure><p>Spyder, aptly named after the Audi toy car the group reimagined, uses the <a href="https://pkg.go.dev/go.viam.com/rdk">Viam Golang SDK</a> and the <a href="https://neurosity.co/crown">Neurosity Crown</a> to take an individual’s brainwaves, train an AI model to detect and identify certain brainwave patterns, and translate them into outputs recognizable to humans.
<br>The team collected the brain’s electrical impulses and forwarded those commands to the Viam interface to control the steering of the car. It makes you wonder, “could this be the future of autonomous vehicles?”</p><p>Dive deeper into this project through their <a href="https://devpost.com/software/spyder">Devpost</a> and <a href="https://github.com/MeldaKiziltan/Spyder">source code</a>.</p><h4>Another set of eyes for teachers in classrooms</h4><p>Meet Rezbot: your classroom’s intelligent ally streamlining daily tasks for teachers. From taking attendance to monitoring engagement, it keeps a finger on the pulse of every student’s presence and participation, transforming the way educators prepare lessons and assessments.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*4Mm6vU4leCehUAsS.jpeg" /><figcaption><em>The team behind Rezbot showcasing its features and functionality to Team Viam.</em></figcaption></figure><p>Rezbot is a web application that uses Computer Vision to identify students and automate the process of attendance. It can generate dashboards for teachers to view statistics on certain students’ classroom habits.</p><p>This group used our Flutter SDK to create a web app where teachers can access a live camera stream from their Viam robot and collect image data using Computer Vision to feed data into these dashboards.</p><p>For more information on Rezbot, head to the team’s <a href="https://devpost.com/software/rezbot">Devpost</a>.</p><h4>Robotic paparazzi for our star hackers</h4><p>Next up on our roster of awesome projects that use Viam, Pic Perfect is the friendly robot companion here to replace that friend who always takes the worst Instagram photos.</p><p>Using a facial detection model, this smart machine tracks your every move, adjusting the robot to the ideal angle and distance so the subject is perfectly centered in the frame. With a simple thumbs-up gesture, it takes your photos and applies filters to produce perfectly crafted portraits, which are then shown in a web app.</p><p>To build this smart machine, the group leveraged Viam’s <a href="https://python.viam.dev/">Python SDK</a> and used a Viam Rover as the base.</p><p>Discover more about this build through the team’s <a href="https://devpost.com/software/picture-perfect-oqgb92">Devpost</a>.</p><h3>Awarding the winners of our category</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6cDx26QSpKKp3n7u.jpeg" /><figcaption><em>Announcing the winners of our category for the hackathon.</em></figcaption></figure><p>As the weekend drew to a close, students took to the stage, showcasing their dedication and innovation during this intense hackathon.</p><p>Viam proudly presented the award for the “Best Use of Viam.” The winners, each receiving a fully equipped Viam Rover complete with a Raspberry Pi, a LIDAR, and an assortment of hardware, demonstrated exceptional skill and creativity.</p><p>We honored two victors in our category: “Spyder,” for its ingenious application of open-source technologies in tandem with Viam, and “Pic-Perfect,” for skillfully integrating machine learning models into their rover prototype and developing a fully functional front end for their robotics project.</p><h3>Reflecting on the event</h3><p>Reflecting on the weekend, one thing became crystal clear: Viam’s platform, with its vast language support and flexibility, is an enabler of creativity.
<br>Many students were surprised at how many languages we support, and the fact that projects were submitted utilizing different SDKs is proof of just that. It showed that it doesn’t matter what type of engineer you are: there’s something for everyone on the Viam platform.</p><p>So, what’s the takeaway for you? Like the students at the hackathon, embrace the challenge of a timeline and try to build a fully functional project in just a weekend, <em>utilizing Viam to help, of course</em>. Dive into our <a href="https://docs.viam.com/tutorials/">tutorials</a> and <a href="https://docs.viam.com/">documentation</a> to get started today.</p><p><em>Originally published at </em><a href="https://www.viam.com/post/from-concept-to-creation-highlights-and-builds-from-hack-the-north"><em>https://www.viam.com</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5375d2b3db96" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>