Category Archives: Design Planning and Research

DPR2_Romanticar


Project Name: Romanticar

Romanticar is a word that combines ‘romantic’ and ‘car’. Our project is for someone who wants to propose: you can surprise your girlfriend with this car. The car also has a special function: it can follow a line. On the bottom of the base there are light recognition sensors, so when you draw a line on the ground, the car can follow it. The box on top can also be opened, so you can put a ring or a special gift inside. When the Romanticar arrives at your girlfriend, you can open the box and give her the ring. Our box design was inspired by the movie ‘The Grand Budapest Hotel’; Mendl’s bakery box was our reference.
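The line-following idea above boils down to a simple decision rule over the three light sensors. Our actual code ran on the Arduino Leonardo and isn’t shown here, but a minimal sketch of the logic might look like this (the sensor layout, `True`-means-line convention, and command names are all assumptions for illustration):

```python
# Sketch of the Romanticar line-following decision, assuming three light
# sensors (left, center, right) that each read True when they see the line.
# The real project ran this kind of logic on an Arduino Leonardo.

def steer(left: bool, center: bool, right: bool) -> str:
    """Return a motor command from the three line-sensor readings."""
    if center and not left and not right:
        return "forward"        # line is under the middle sensor
    if left and not right:
        return "turn_left"      # line drifted toward the left sensor
    if right and not left:
        return "turn_right"     # line drifted toward the right sensor
    return "stop"               # line lost (or ambiguous reading)
```

For example, `steer(False, True, False)` returns `"forward"`, while `steer(True, False, False)` returns `"turn_left"`.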

Resources:
-Arduino Leonardo
-3 light sensors
-foam board (for the base)
-2 rear wheels
-1 front wheel
-gearbox
-screws
-soldering iron
-battery pack
-9V battery
-3 boxes
-white paper tape
-pink acrylic paint
-blue ribbon
-Loctite

< Process >

>Gearbox, wheels



>Light sensor


>Prototype

We bought a toy kitty bus as a prototype and tested the sensor recognition on it.


>Base prototype


>Base laser cutting


>Test


>Fabricate


>Making


>Finish


Conclusion:

I really learned a lot from this project. At the final presentation, we failed to follow the line because of a power problem. The first time we tested with just the base, it worked really well, but once we put the boxes on it, the weight went up. We thought the boxes wouldn’t weigh much, but they did, and the motor needed more power. We would have had to reassemble the gearbox for more torque, but by then it was too late. It was really sad, but I learned from it: I have to run many, many tests, and I have to be a more careful, precise person. I also learned a new skill from this project: laser cutting! It was a very interesting experience, and I hope to use this skill in my other projects later. The fab lab was really great, too; the 3D printer work was really impressive!

Thank you for teaching us, Professor Todd!!★


Raspberry Pi OpenCV Pan & Tilt Face Tracker

Create your own face tracking, pan and tilt camera on the Raspberry Pi!

This tutorial will demonstrate use of the OpenCV (computer vision) library to identify and track faces on the Raspberry Pi using two servos and a USB webcam. For the interested, I previously covered a more thorough overview of the installation of OpenCV from source here; however, I have found that the apt package is sufficient for all but the most bleeding-edge of projects.

This project is based on the OpenCV face tracking example that comes along with the source-based distribution. In short, it performs face detection using Haar-like features to analyze the video frame, and locates any faces present within it. In the live video window, identified faces are surrounded with a red rectangle. For additional information about how face detection works, as well as the specifics of face detection with OpenCV, I recommend this article by Robin Hewitt.

Using the coordinates of the rectangle vertices, my script calculates the (X,Y) position of the center of the face. If the face is sufficiently on the left side of the screen, the pan servo will progressively rotate leftward; on the right side, rightward. This is likewise performed for the tilt servo: if the face is in the upper portion of the screen, the camera will tilt upward; in the lower portion, downward. If the face is detected reasonably within the center of the image, no action is performed by the servos. This prevents unnecessary jitter once the camera has locked itself onto the face.
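The dead-zone behavior described above can be sketched as a small decision function. This is not the linked script itself; the frame size, the ±40 px dead zone, and the one-degree step size are assumptions for illustration:

```python
# Minimal sketch of the pan/tilt decision described above, assuming a
# 320x240 frame and a +/-40 px "dead zone" around the image center.
# The actual thresholds and servo step sizes in the real script may differ.

FRAME_W, FRAME_H = 320, 240
DEAD_ZONE = 40   # no servo motion while the face stays this close to center

def servo_steps(face_x: int, face_y: int) -> tuple:
    """Return (pan_step, tilt_step) in degrees for a face centered at (x, y)."""
    pan = tilt = 0
    if face_x < FRAME_W // 2 - DEAD_ZONE:
        pan = -1          # face on the left side -> rotate leftward
    elif face_x > FRAME_W // 2 + DEAD_ZONE:
        pan = 1           # face on the right side -> rotate rightward
    if face_y < FRAME_H // 2 - DEAD_ZONE:
        tilt = 1          # face in the upper portion -> tilt upward
    elif face_y > FRAME_H // 2 + DEAD_ZONE:
        tilt = -1         # face in the lower portion -> tilt downward
    return pan, tilt
```

A face centered at (160, 120) yields `(0, 0)`, i.e. no servo motion, which is what suppresses the jitter once the camera has locked on.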

http://mitchtech.net/raspberry-pi-servo-face-tracker/

what..?

Computer vision

Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.

As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration.

Computer vision’s application areas include:
-Controlling processes, e.g., an industrial robot;
-Navigation, e.g., by an autonomous vehicle or mobile robot;
-Detecting events, e.g., for visual surveillance or people counting;
-Organizing information, e.g., for indexing databases of images and image sequences;
-Modeling objects or environments, e.g., medical image analysis or topographical modeling;
-Interaction, e.g., as the input to a device for computer-human interaction, and
-Automatic inspection, e.g., in manufacturing applications.

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel Russia research center in Nizhny Novgorod, and now supported by Willow Garage and Itseez. It is free for use under the open source BSD license. The library is cross-platform. It focuses mainly on real-time image processing. If the library finds Intel’s Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself.

OpenCV’s application areas include:
-2D and 3D feature toolkits
-Egomotion estimation
-Facial recognition system
-Gesture recognition
-Human–computer interaction (HCI)
-Mobile robotics
-Motion understanding
-Object identification
-Segmentation and Recognition
-Stereopsis (stereo vision: depth perception from 2 cameras)
-Structure from motion (SFM)
-Motion tracking
-Augmented reality

Shopping Cart Drone Reference

Kinect Cart <Smart Cart Project>


Whole Foods is pursuing a Smarter Cart project built on MS Kinect technology, combining a Windows tablet, a UPC scanner, an RFID reader, and voice recognition.

Using Kinect’s motion-capture approach, the cart recognizes a person’s movements and automatically follows the shopper as they move through the store, and voice-recognition technology lets the shopper give it voice commands. When a product is placed in the cart, the cart’s scanner and RFID reader recognize the product name and price, building a shopping list that can be shared. There is no need to wait in line to pay: payment happens automatically as the cart passes through the checkout lane.

According to MS, beyond Whole Foods, some 300 other companies are currently pursuing commerce application projects based on Kinect’s Windows platform. Kinect is being used in a variety of commerce services: tracking shoppers’ behavior in 3D to analyze behavior patterns, recognizing a customer walking past a shop window and showing personalized product advertisements in real time, and letting customers try out products in a virtual dressing room. It will be worth watching what strategy MS takes the lead with, using Kinect on commerce platforms designed to strengthen the purchase experience at the point of sale.

Whole Foods prototype puts Kinect on shopping cart, follows people around store

The quest for a high-tech “shopping cart of the future” is nothing new, but Whole Foods is planning to test a new spin on the concept, using Microsoft’s Kinect sensor for Windows. The motorized cart identifies a shopper with a loyalty card, follows the shopper around the store, scans items as they’re placed inside, marks them off the shopping list, and even checks the shopper out in the end.

Microsoft showed the very early prototype, being developed for Whole Foods by a third-party developer, Austin-based Chaotic Moon, during an event on the Redmond campus today, hosted by Craig Mundie, the company’s chief research and strategy officer.

The company says the project is literally weeks old, and that was apparent in the demo, which included a couple of false starts where the sensor didn’t precisely track the shopper. The technology will need to be ironed out before it’s deployed, lest our shopping trips turn into destruction derbies.

But it’s an interesting application that shows what outside developers can do now that a Kinect software development kit has been released for Windows, expanding the sensor beyond the Xbox 360 game console.

Microsoft says more than 300 companies are working on commercial applications for Kinect on Windows. Other demos today included an application that gave an immersive virtual tour of a new vehicle, and another that let kids interact with a wildlife show.