Maker Pro

Can I create an inexpensive wearable proximity warning device for the visually impaired?

Xander7036

Oct 10, 2018
3
Joined
Oct 10, 2018
Messages
3
The goals for this device are to cost less than 50 dollars and to be simple enough to operate that the learning curve is short. It must also withstand the typical stress of daily use and be wearable, possibly on a pair of glasses or a belt.
These are the materials I had in mind. They are in no way set in stone; if you can see a better way to build it, I can completely change this list. In particular, I am looking for a way to expand the range to about two or three feet and a way to use a rechargeable power source.
PCB, IR receiver, IR LED, BC557 transistor, two 100 Ω resistors, LED, 100 kΩ variable resistor, 5 V 3 A power supply, copper wiring, soldering iron (all materials may be subject to change)
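
For illustration, here is a minimal sketch of the reflected-IR approach this parts list points toward, assuming a microcontroller (e.g. an Arduino) is added and the IR receiver is a 38 kHz demodulating module; the pins, timing, and receiver choice are all assumptions, not a tested design:

```cpp
// Hypothetical wiring -- pins and timing are assumptions, not a tested design.
const int IR_LED_PIN  = 3;   // IR LED, driven through a transistor (e.g. the BC557)
const int IR_RECV_PIN = 7;   // 38 kHz IR receiver module output (active LOW)
const int WARN_PIN    = 13;  // warning indicator LED

void setup() {
  pinMode(IR_RECV_PIN, INPUT);
  pinMode(WARN_PIN, OUTPUT);
}

void loop() {
  tone(IR_LED_PIN, 38000);        // emit a 38 kHz carrier burst from the IR LED
  delayMicroseconds(600);         // give the receiver time to lock onto the burst
  bool reflection = (digitalRead(IR_RECV_PIN) == LOW);  // LOW = carrier detected
  noTone(IR_LED_PIN);
  digitalWrite(WARN_PIN, reflection ? HIGH : LOW);
  delay(50);                      // idle gap so the receiver's AGC recovers
}
```

Detection range with this scheme is set mostly by LED drive current and receiver sensitivity, so stretching it to two or three feet usually comes down to driving the LED harder (within its ratings) and shielding the receiver from direct LED spill.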
 

kellys_eye

Jun 25, 2010
6,514
Joined
Jun 25, 2010
Messages
6,514
What do your components actually DO? How do they determine/detect proximity? Proximity to what?

If it is to be wearable, how do you presume to power it?
 

hevans1944

Hop - AC8NS
Jun 21, 2012
4,878
Joined
Jun 21, 2012
Messages
4,878
With the technology available today: a pair of CCD image sensors with pin-hole lenses, a really fast but inexpensive microcomputer, and a boatload of processor memory, it should be possible to develop a software real-time image processing solution, wearable and rechargeable battery powered. The two image sensors provide a stereo image-pair that can be processed to yield distance-to-target information for objects in front of the visually impaired. Further processing can be used to provide audible cues, possibly binaural cues, that help the user to determine how large, how close, and how fast proximate objects are to the user... an invaluable aid to crossing a street without a guide or crossing a room without tripping over objects on the floor.
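
As a worked sketch of the geometry behind that: for a pinhole stereo pair, range follows Z = f·B/d, where f is the focal length in pixels, B the baseline between the sensors, and d the disparity (the horizontal pixel shift of the same object between the two images). The focal length and baseline below are illustrative assumptions, not measured values:

```cpp
#include <cstdio>

// Classic pinhole-stereo relation: range Z = f * B / d.
// Both optics numbers below are illustrative assumptions.
int main() {
    const double focal_px   = 700.0;  // assumed focal length, in pixels
    const double baseline_m = 0.06;   // assumed 6 cm sensor separation

    for (double disparity = 80.0; disparity >= 10.0; disparity -= 10.0) {
        double range_m = focal_px * baseline_m / disparity;
        std::printf("disparity %4.0f px  ->  range %.2f m\n", disparity, range_m);
    }
    return 0;
}
```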

I am somewhat surprised that the solution described above doesn't already exist as a marketable product. Maybe it does exist, but the price point just hasn't reached fifty dollars yet. The hardware computer capability that required a thousand square feet of floor space, needed a hundred kilowatts of electrical power, and several tons of air conditioning to cool the electronics in the middle of the last century will now easily fit on a thumbnail and run comfortably from a stack of 2032 lithium coin cells. Or maybe from a single rechargeable vape box mod.
 

(*steve*)

¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,510
Joined
Jan 21, 2010
Messages
25,510
With those components, this is how I would do it. The bonus is that no additional power supply is required!

Wear a belt or a headband that has long PCBs sticking out of it. The wearer will get haptic feedback when they approach an object.
 

kellys_eye

Jun 25, 2010
6,514
Joined
Jun 25, 2010
Messages
6,514
This is one of those 'I want to make a s**t load of money but need 'your' knowledge to do it' threads.

Compare the OP's 'parts list' to Hop's description and note the vast gulf between them - and 'we're' supposed to fill in the gaps????
 

Xander7036

Oct 10, 2018
3
Joined
Oct 10, 2018
Messages
3
This is one of those 'I want to make a s**t load of money but need 'your' knowledge to do it' threads.

Compare the OP's 'parts list' to Hop's description and note the vast gulf between them - and 'we're' supposed to fill in the gaps????
I see where you might get that idea, but I am actually just in the very early stages of developing my science fair project, and I'm no engineering genius. The most I can make is a very rudimentary proximity sensor with a warning distance of about twelve inches. I could not really find anything online to help me past that point, which led me to create this thread.
 

Xander7036

Oct 10, 2018
3
Joined
Oct 10, 2018
Messages
3
With the technology available today: a pair of CCD image sensors with pin-hole lenses, a really fast but inexpensive microcomputer, and a boatload of processor memory, it should be possible to develop a software real-time image processing solution, wearable and rechargeable battery powered. The two image sensors provide a stereo image-pair that can be processed to yield distance-to-target information for objects in front of the visually impaired. Further processing can be used to provide audible cues, possibly binaural cues, that help the user to determine how large, how close, and how fast proximate objects are to the user... an invaluable aid to crossing a street without a guide or crossing a room without tripping over objects on the floor.

I am somewhat surprised that the solution described above doesn't already exist as a marketable product. Maybe it does exist, but the price point just hasn't reached fifty dollars yet. The hardware computer capability that required a thousand square feet of floor space, needed a hundred kilowatts of electrical power, and several tons of air conditioning to cool the electronics in the middle of the last century will now easily fit on a thumbnail and run comfortably from a stack of 2032 lithium coin cells. Or maybe from a single rechargeable vape box mod.
I love the CCD image sensor idea; I'll definitely use it. For the microcomputer I was thinking maybe an Arduino. Do you have any specific models, or other microcomputers, in mind that may be better? As for the rechargeable battery, I will most likely use the vape box mod if I can figure out a way. Also, how large do you think the detection distance would be?
 

kellys_eye

Jun 25, 2010
6,514
Joined
Jun 25, 2010
Messages
6,514
developing my science fair project
ahhh... that helps. This is school-grade stuff then?

Look into a standard PIR detector device (your bog-standard yard light - in particular its sensor) and experiment with the sensor/filter/reflector to get the kind of coverage/detection range you want. This will only work for IR-emitting objects (humans, animals, torch-bearers, etc.), but you might be able to extract some form of 'resolution' - size of object, direction, etc. - by monitoring the PIR detector output.
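
If the PIR module has a digital output (most yard-light sensors and hobby modules do), reading it is about as simple as microcontroller code gets. A minimal sketch, assuming an Arduino and hypothetical pin assignments:

```cpp
// Assumed setup: a PIR module with a digital output (the sensor from a
// yard light, or a hobby HC-SR501-style module) feeding an Arduino.
const int PIR_PIN    = 2;  // PIR digital output (hypothetical pin)
const int BUZZER_PIN = 8;  // piezo buzzer for the warning

void setup() {
  pinMode(PIR_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  if (digitalRead(PIR_PIN) == HIGH) {  // warm object moving in the field of view
    tone(BUZZER_PIN, 2000, 100);       // short 2 kHz beep
  }
  delay(100);
}
```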

Your resolution is limited (severely) by the detection method - IR is poor, ultrasonic is better, microwave better still, etc. - but an increase in resolution brings a corresponding increase in the processing power required to deal with the data. This may devolve into some seriously complicated programming - far beyond an Arduino-based solution.
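
For the ultrasonic option, a minimal Arduino sketch, assuming an HC-SR04-style module; the pins and the roughly three-foot warning threshold are assumptions:

```cpp
// Assumed hardware: an HC-SR04-style ultrasonic module and an Arduino;
// pins and the ~3 ft threshold are assumptions, not a tested design.
const int TRIG_PIN   = 9;
const int ECHO_PIN   = 10;
const int BUZZER_PIN = 8;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  // Fire a 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long echo_us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // ~30 ms timeout

  if (echo_us > 0) {
    float cm = echo_us / 58.0;           // round-trip time -> distance in cm
    if (cm < 90.0) {                     // warn inside roughly three feet
      // Crude audible cue: pitch rises as the object gets closer.
      tone(BUZZER_PIN, 500 + (unsigned int)(2000.0 * (90.0 - cm) / 90.0), 60);
    }
  }
  delay(100);
}
```

The caveat above still applies: this gives one range reading in one direction, with no information about the size or shape of the object.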
 

hevans1944

Hop - AC8NS
Jun 21, 2012
4,878
Joined
Jun 21, 2012
Messages
4,878
Sometime in the last century I was involved with the Defense Mapping Agency in St. Louis, MO, as an employee of a private contractor company that did highly classified work in support of the intelligence and reconnaissance community. One of the DMA's missions (maybe its only mission) is to provide very accurate digital maps for navigation and targeting by our defense forces. We won't go into how they acquire the imagery to make these maps, but the imagery is in the form of stereo pairs photographed at various altitudes.

In the latter part of the 20th century, digital terrain elevation data (DTED) was acquired by a human being who "flew" a dot over the imagery by peering into a pair of ocular eyepieces while attempting to keep the dot on the terrain, neither above nor below the ground. He or she moved the stereo-fused dot image in elevation using a foot-wheel that varied the optical separation (parallax) of the stereo image-pair, while the stereo viewer moved the terrain images in a series of parallel straight lines. The "height" of the dot above the terrain was periodically recorded as a digital datum.

Variations of this type of machine, called digital stereo compilers, have been used for decades by cartographers to extract elevation data from stereo aerial photographs. Depending on the known (measured) characteristics of the camera optics and the altitude at which the images are acquired, height errors of the DTED can be as little as a few inches with a highly skilled operator.

Operating a compiler is not an easy task. It takes a special kind of person to sit and peer through a stereo microscope for hours while "flying" the stereo-fused dot over (and on) the terrain. Much effort has gone into automating this task using computers to search the two image fields for corresponding points on the ground. Once a few of these points are found in the two images, the parallax that represents the height of other points in the image pairs can be calculated or interpolated from the known pairs of image points. This allows a DTED set to be built that models the actual terrain that was photographed.

But how accurate is the data set? The job that our company was involved in was to propose and build a "drop-in" add-on to DMA's existing stereo compilers (they have lots of them) that would superimpose (optically) the DTED data (displayed on a high-resolution, raster-scanned CRT) over the raw photographic images. The hope was that this would visually reveal any glaring errors in the DTED. Our company had already delivered a set of workstations that DMA personnel could use to view and edit DTED data, but this new approach sought to directly compare (using the human eye-brain synergy) DTED with the ground imagery from which it was compiled.

All this occurred in the late 1980s, when the microprocessor revolution was just beginning to replace dumb terminals, communicating with "Big Iron" mainframes, on desktops throughout the defense establishment. At that time they still held on to the idea that a central processor was necessary, so the smart terminals that replaced the dumb terminals didn't do any serious processing.

My company was a DEC (Digital Equipment Corporation) house using PDP-11 minicomputers and VAX-11/750 mainframes. Management either didn't see or failed to acknowledge the coming microprocessor revolution. They held fast to the belief that microprocessors, as typified by the IBM PC, were just "toy" computers with no real future. IBM must have thought so too because they got out of the microprocessor-based personal computer business. Fast forward twenty years to the 21st century where microprocessors are now dirt cheap and everywhere. A few people did see that coming. And a few others see where the human-machine-interface (HMI) is going, although very few will see just how far it will eventually evolve. We live in "interesting times," as the old Chinese curse goes.

I am actually just in the very early stages of developing my science fair project, and I'm no engineering genius. The most I can make is a very rudimentary proximity sensor with a warning distance of about twelve inches.

A successful science fair project is typically 90% research and 10% presentation of the results of that research. A working prototype is nice to have, but not necessary if your presentation can simulate it or describe what is necessary to build it. Think animated audio-video presentations, fairly easy to do with software available today. But you have to have something to say!

Your science fair project could exploit the idea of 3D image processing as an aid for the visually impaired by describing exactly what is necessary to get there. What resolution do the CCD image sensors need, for example, to distinguish between a book left on the floor and a sleeping cat? What resolution is required to create enough parallax to determine range to objects up to five, ten, twenty feet or more away? What kind of software is needed to locate corresponding points in the two images in real time for range processing? What kind of image recognition software is needed to distinguish between cats and books? If given all the data you need from the processed stereo pairs, how do you present the analysis of this data to a visually impaired person quickly and unambiguously so as to guide their movements?
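
A small calculation along those lines makes the resolution question concrete. Using the pinhole-stereo relation d = f·B/Z (the focal length and baseline below are assumed values), the disparity at those ranges, and the range error per pixel of disparity error, work out as follows:

```cpp
#include <cstdio>

// How much disparity does a stereo rig produce at the ranges asked about,
// and how fast does accuracy degrade? d = f * B / Z, and the depth error
// per pixel of disparity error is roughly Z^2 / (f * B). The focal length
// and baseline are illustrative assumptions.
int main() {
    const double focal_px    = 700.0;   // assumed focal length, in pixels
    const double baseline_m  = 0.06;    // assumed 6 cm baseline
    const double ranges_ft[] = {5.0, 10.0, 20.0};

    for (double ft : ranges_ft) {
        double z_m = ft * 0.3048;                        // feet -> metres
        double disparity  = focal_px * baseline_m / z_m; // pixels
        double err_per_px = z_m * z_m / (focal_px * baseline_m);
        std::printf("%4.0f ft: disparity %5.1f px, ~%.2f m range error per pixel\n",
                    ft, disparity, err_per_px);
    }
    return 0;
}
```

The Z² term is the punchline: doubling the range quarters the accuracy, which is why the twenty-foot case is so much harder than the five-foot case.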

In my previous post #3, I suggested a binaural stereo approach for the HMI, but others have also suggested haptic and other sensory input channels that might be less expensive or easier to implement. One of the earliest research efforts into artificial vision used an array of vibrating reeds, strapped to the wearer's naked back, to convey a crude image of what a single video camera viewed. Such devices, in smaller form, might already be familiar to blind Braille readers, so a hand-held Braille transducer might be an appropriate HMI.

Remember, the purpose of a Science Fair is to demonstrate that you know how to do science. A gee-whiz prototype may look cool, but you should also show that you have invested some thought and time in research to solve whatever problem you think your prototype solves. It may turn out that the most promising solutions are waaay out of your limited time and budget constraints. That doesn't mean you should abandon them. It does mean you need to describe the difficulties that impede their implementation at this time.
 

(*steve*)

¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Jan 21, 2010
25,510
Joined
Jan 21, 2010
Messages
25,510
You can make life easier for yourself by scanning the scene with an IR laser and looking for the position of the bright spot in a pair of stereo images.

Of course, there may be safety issues with wearing a scanning laser in a room full of people (mostly if it stops scanning).
 

hevans1944

Hop - AC8NS
Jun 21, 2012
4,878
Joined
Jun 21, 2012
Messages
4,878
You can make life easier for yourself by scanning the scene with an IR laser and looking for the position of the bright spot in a pair of stereo images.
This approach is similar to how a "point cloud" 3D digitizer works, except stereo pairs are not used or created. The digitizer returns x, y, z co-ordinates of points on the surface of the scanned object. Importing this data into a parametric solid-model program, to allow eventual reproduction of the scanned object, is a major effort and AFAIK is always done in a post-processor rather than in real time, and almost always with human intervention to ensure that the parametric model is properly constructed.

If you add a stereo imaging sensor to the IR scanner, that would certainly simplify data acquisition, since points that are illuminated in both imaging sensors are, by definition, corresponding points. No fancy image-processing algorithms are required to find and correlate corresponding points, i.e., points in the stereo pairs that represent the same point on the actual 3D object.

Finding correspondences has always been a major problem for automatic (computerized) stereo compilers, typically when one of the two corresponding points in the stereo pair is obscured, say by a cloud, or falls in a featureless area such as over water. The last time I looked into this, the only "solution" was to interpolate the obscured point-pairs from "known good" point-pairs - hardly satisfactory when mapping areas with large bodies of water or considerable cloud cover.

As you mentioned earlier, you sometimes need to move your scan to a more amenable spectrum. And this is what "sensor fusion" is all about, now that there are so many choices available to acquire "imagery" - not just for mapping or cartography, but for object classification, recognition, and tracking. "Real soon now" we will need the help of an Artificial Intelligence (AI) just to sort through the fire-hose volume of data being generated. Multiple fire-hoses of data, actually.
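
A toy sketch of how trivial that correspondence search becomes with a single laser spot in view: find the brightest pixel in each sensor's scan line and take the column difference as the disparity (the optics numbers are assumed, and the image data is synthetic):

```cpp
#include <cstdio>
#include <vector>

// With a single bright laser spot in the scene, correspondence is trivial:
// the spot is simply the brightest pixel in each image, so the column
// difference is the disparity. One scan line per sensor, synthetic data.
static int brightest(const std::vector<int>& row) {
    int best = 0;
    for (size_t i = 1; i < row.size(); ++i)
        if (row[i] > row[best]) best = static_cast<int>(i);
    return best;
}

int main() {
    const double focal_px = 700.0, baseline_m = 0.06;  // assumed optics
    std::vector<int> left(320, 10), right(320, 10);    // dim background
    left[180]  = 255;  // laser spot as seen by the left sensor
    right[150] = 255;  // same spot, shifted in the right sensor

    int d = brightest(left) - brightest(right);        // disparity in pixels
    if (d > 0)
        std::printf("spot disparity %d px -> range %.2f m\n",
                    d, focal_px * baseline_m / d);
    return 0;
}
```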

Of course there may be safety issues wearing a scanning laser in a room full of people (mostly if it stops scanning)
There are always safety issues with any sort of active scanning where radiation is emitted. I doubt a personally worn IR (or any other wavelength) scanner would be practical. Imagine a whole room full of visually impaired people, each equipped with an IR emitter scanning the environment. Passive sensors using existing illumination are more reasonable. From a military viewpoint (a lot of research in this area has military application and funding), passive sensors are more "stealthy" since they don't directly reveal where the observer is. That doesn't rule out using laser target designators, but those are really only effective against unsophisticated targets. On the battlefield, bright emitters become targets for the enemy if it can "see" them. In the civilian world they just add to the background noise. :D
 