Monthly Archives: October 2014

Raspberry Pi + PWM RGB LED Strip

This tutorial demonstrates how to easily use a Raspberry Pi to drive 12V RGB LED strips using pulse width modulation (PWM). Out of the box, the Raspberry Pi has only one GPIO pin capable of hardware PWM. However, thanks to the efforts of Richard Hirst and his ServoBlaster kernel module, standard GPIO pins can be used to perform PWM.

Note: The flashing of the LED strip due to PWM is only noticeable in the uploaded video; in reality, the colors progress smoothly without any visible flashing.

Hardware

Parts needed:
• Raspberry Pi
• 3 x TIP120 power transistors
• RGB LED strip
• Perfboard/breadboard
• Hook-up wire
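
With ServoBlaster loaded, each color channel can be dimmed by writing duty-cycle commands to the /dev/servoblaster device it creates. Below is a minimal sketch (not the original tutorial code), assuming the red, green and blue TIP120 bases are driven from GPIO pins that ServoBlaster maps to channels 0, 1 and 2, and that your ServoBlaster build accepts the percentage syntax; the usable brightness range also depends on the cycle time servod was started with.

    # Sketch: fade an RGB strip by writing duty-cycle commands to ServoBlaster.
    # Assumes channels 0, 1, 2 map to the GPIO pins switching the red, green
    # and blue TIP120 transistors (check your ServoBlaster pin mapping).
    import time

    SERVOBLASTER = "/dev/servoblaster"

    def set_channel(channel, percent):
        # Set one channel to a duty cycle between 0 and 100 percent.
        with open(SERVOBLASTER, "w") as dev:
            dev.write("%d=%d%%\n" % (channel, percent))

    def set_rgb(red, green, blue):
        for channel, level in enumerate((red, green, blue)):
            set_channel(channel, level)

    if __name__ == "__main__":
        # Quick smoke test: fade the red channel up and down forever.
        while True:
            for level in list(range(0, 101, 5)) + list(range(95, -1, -5)):
                set_rgb(level, 0, 0)
                time.sleep(0.05)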

http://mitchtech.net/category/tutorials/raspberry-pi/

maybe for lights…


Raspberry Pi OpenCV Pan & Tilt Face Tracker

Create your own face tracking, pan and tilt camera on the Raspberry Pi!

This tutorial demonstrates using the OpenCV computer vision library to identify and track faces on the Raspberry Pi with two servos and a USB webcam. For those interested, I previously covered a more thorough overview of installing OpenCV from source here; however, I have found that the apt package is sufficient for all but the most bleeding-edge projects.

This project is based on the OpenCV face tracking example that comes along with the source-based distribution. In short, it performs face detection using Haar-like features to analyze the video frame and locates any faces present within it. In the live video window, identified faces are surrounded by a red rectangle. For additional information about how face detection works, as well as the specifics of face detection with OpenCV, I recommend this article by Robin Hewitt.

Using the coordinates of the rectangle vertices, my script calculates the (X, Y) position of the center of the face. If the face is sufficiently far to the left side of the screen, the pan servo progressively rotates leftward; on the right side, rightward. The same is done for the tilt servo: if the face is in the upper portion of the screen, the camera tilts upward; in the lower portion, downward. If the face is detected reasonably close to the center of the image, the servos take no action. This prevents unnecessary jitter once the camera has locked onto the face.
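
Here is a condensed sketch of that control logic (not the original script), assuming OpenCV's Python bindings, the stock frontal-face Haar cascade shipped with the Debian opencv packages, and hypothetical move_pan(step) / move_tilt(step) helpers that nudge the two servos (for example via ServoBlaster):

    # Sketch of the pan/tilt loop described above (not the original script).
    # move_pan/move_tilt are hypothetical helpers that step the servos.
    import cv2

    CASCADE = cv2.CascadeClassifier(
        "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")
    DEAD_ZONE = 40  # pixels around the image center where the servos hold still

    def track(move_pan, move_tilt):
        cap = cv2.VideoCapture(0)                  # USB webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = CASCADE.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=4)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                ex = (x + w // 2) - frame.shape[1] // 2   # offset from center, X
                ey = (y + h // 2) - frame.shape[0] // 2   # offset from center, Y
                if abs(ex) > DEAD_ZONE:
                    move_pan(-1 if ex < 0 else 1)         # pan toward the face
                if abs(ey) > DEAD_ZONE:
                    move_tilt(-1 if ey < 0 else 1)        # tilt toward the face
            cv2.imshow("face tracker", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
        cv2.destroyAllWindows()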

http://mitchtech.net/raspberry-pi-servo-face-tracker/

what..?

OpenCV on Raspberry Pi

OpenCV is a suite of powerful computer vision tools. Here is a quick overview of how I installed OpenCV on my Raspberry Pi with debian6-19-04-2012. The guide is based on the official OpenCV Installation Guide on Debian and Ubuntu. Before you begin, make sure you have expanded your SD card to allow for the install of OpenCV; it's a big package with lots of dependencies. You can follow my instructions here. Once you have expanded the SD card, open a terminal and install the packages listed in the guides linked below:

http://letsmakerobots.com/node/36947
http://mitchtech.net/raspberry-pi-opencv/
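
Once the packages from the linked guides are installed, a quick way to confirm the Python bindings work is a trivial import-and-convert test (a minimal check of my own, assuming the cv2 module provided by the python-opencv package):

    # Sanity check: the cv2 bindings import and a basic conversion runs.
    import cv2
    import numpy as np

    img = np.zeros((100, 100, 3), dtype=np.uint8)       # blank BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # grayscale conversion
    print("OpenCV OK, result shape: %s" % (gray.shape,))  # expect (100, 100)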

Difficult…

Computer vision

Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for visual perception.

As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration.

Computer vision’s application areas include:
- Controlling processes, e.g., an industrial robot;
- Navigation, e.g., by an autonomous vehicle or mobile robot;
- Detecting events, e.g., for visual surveillance or people counting;
- Organizing information, e.g., for indexing databases of images and image sequences;
- Modeling objects or environments, e.g., medical image analysis or topographical modeling;
- Interaction, e.g., as the input to a device for computer–human interaction; and
- Automatic inspection, e.g., in manufacturing applications.

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel Russia research center in Nizhny Novgorod, and now supported by Willow Garage and Itseez. It is free for use under the open source BSD license. The library is cross-platform. It focuses mainly on real-time image processing. If the library finds Intel’s Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself.
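
As a tiny illustration of the kind of image processing the library provides, here is a minimal sketch of my own (assuming the Python bindings and an image file named input.jpg):

    # Minimal OpenCV example: load an image, detect edges, save the result.
    import cv2

    img = cv2.imread("input.jpg")                   # BGR image as a NumPy array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert to grayscale
    edges = cv2.Canny(gray, 100, 200)               # Canny edge detection
    cv2.imwrite("edges.jpg", edges)                 # write the edge map to disk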

OpenCV’s application areas include:
- 2D and 3D feature toolkits
- Egomotion estimation
- Facial recognition systems
- Gesture recognition
- Human–computer interaction (HCI)
- Mobile robotics
- Motion understanding
- Object identification
- Segmentation and recognition
- Stereopsis (stereo vision): depth perception from two cameras
- Structure from motion (SFM)
- Motion tracking
- Augmented reality

Shopping Cart Drone Reference

Kinect Cart <Smart Cart Project>


Whole Foods is pursuing a Smarter Cart project built on Microsoft's Kinect technology that combines a Windows tablet, a UPC scanner, an RFID reader, and voice recognition.

Using Kinect's motion-capture capabilities, the cart recognizes a person's movements and automatically follows the shopper as they move around the store, while voice recognition allows the shopper to issue spoken commands. When a product is placed in the cart, the built-in scanner and RFID reader identify its name and price, building a shopping list that can be shared. There is no need to wait in line to pay: simply passing through the checkout lane completes payment automatically.

Microsoft says that, beyond Whole Foods, some 300 companies are pursuing commerce application projects based on the Kinect for Windows platform. Kinect is being used in a variety of commerce services: tracking shoppers' behavior in 3D to analyze behavioral patterns, recognizing a customer walking past a shop window and showing personalized product advertisements in real time, and letting customers try out products in virtual dressing rooms. It will be worth watching what strategy Microsoft pursues with Kinect as a commerce platform for strengthening the purchase experience at the point of sale.

Whole Foods prototype puts Kinect on shopping cart, follows people around store

The quest for a high-tech “shopping cart of the future” is nothing new, but Whole Foods is planning to test a new spin on the concept, using Microsoft’s Kinect sensor for Windows. The motorized cart identifies a shopper with a loyalty card, follows the shopper around the store, scans items as they’re placed inside, marks them off the shopping list, and even checks the shopper out in the end.

Microsoft showed the very early prototype, being developed for Whole Foods by a third-party developer, Austin-based Chaotic Moon, during an event on the Redmond campus today, hosted by Craig Mundie, the company’s chief research and strategy officer.

The company says the project is literally weeks old, and that was apparent in the demo, which included a couple of false starts where the sensor didn't precisely track the shopper. The technology will need to be ironed out before it's deployed, lest our shopping trips turn into destruction derbies.

But it’s an interesting application that shows what outside developers can do now that a Kinect software development kit has been released for Windows, expanding the sensor beyond the Xbox 360 game console.

Microsoft says more than 300 companies are working on commercial applications for Kinect on Windows. Other demos today included an application that gave an immersive virtual tour of a new vehicle, and another that let kids interact with a wildlife show.