ESE441 and ESE442: ESE Senior Design,
David Magerman, Philip Farnum
NeoNur: A Nursing Apparatus for Premature Neonates
Author: Leslie Chen
About half a million premature births (one-eighth of all live births) occur in the United States each year. Researchers at the Children's Hospital of Philadelphia (CHOP) and the University of Pennsylvania School of Nursing are currently analyzing the feeding behaviors of these premature babies. A prototype of the NeoNur has been developed to measure, process, and store sucking pressures. Basic behavioral characteristics, such as maximum sucking pressure and number of sucking bursts, can be extracted from this raw data and viewed immediately after the feeding session. The raw data can also be uploaded to a PC for further analysis. The NeoNur design is simpler and more intuitive than the current CHOP system, and significantly decreases the time needed to assemble, disassemble, and clean the apparatus.
Author: Daniel Falcone
Recently, many gyms have attached personal televisions to treadmills. This change improves the workout experience of gym clients by providing entertainment while they exercise.
The addition of personal televisions has resulted in a new issue regarding control of the televisions by the users. Conventional hand-held remote controls can interrupt exercise routines and prove bothersome to the user.
In the selected solution, a hands-free system allows the user to control the functions of a television. The solution uses voice recognition and a microprocessor to send a corresponding control signal to the television.
The full implementation of this system allows for the control of all channels and television functions. The system is also robust enough to perform for users of different genders, ages, ethnic backgrounds and native languages; however, system performance decreases with poor pronunciation and pitch. Nevertheless, the current implementation of this system is adequately suited to control a television.
Author: Anujit Shastri
Advisors: Van der Spiegel; Gruev; Yang
An important property of still picture cameras and video cameras is their ability to present an image that clearly depicts different objects. Normal cameras have the ability to discern between items in bright settings. When ambient light is very dim, pictures can be taken with a flash to illuminate the environment.
However, there are many scenarios where there is little light and the use of a flash is either prohibited or can compromise a position. Additionally, video cameras do not even have the option, as a flash cannot be triggered for every frame. In these situations, normal cameras lack the necessary technology to distinguish one object from another.
The approach this system takes to overcome the constraints of normal cameras is to process polarization information collected by three cameras. By placing filters oriented at 0, 45, and 90 degrees over the cameras, an intensity measurement is collected at each angle, yielding the complete linear polarization state of every pixel.
With the data collected, an overall image is generated through image recombination and various algorithms. This image displays far more information than that rendered by traditional cameras using only the intensity of light. The additional contrast provided results in the user being able to differentiate between two objects in low light situations that were previously indistinguishable.
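As a rough illustration of the recombination step, the three filtered intensities can be converted per pixel into the linear Stokes parameters, from which contrast measures such as the degree and angle of linear polarization follow. The sketch below assumes ideal polarizers and perfectly registered images; it is not necessarily the project's actual algorithm.

```python
import numpy as np

def linear_stokes(i0, i45, i90):
    """Recover the linear Stokes parameters per pixel from three intensity
    images taken behind polarizers at 0, 45, and 90 degrees."""
    s0 = i0 + i90                 # total intensity
    s1 = i0 - i90                 # 0-vs-90 degree preference
    s2 = 2.0 * i45 - s0           # 45-vs-135 degree preference
    # Degree of linear polarization: 0 = unpolarized, 1 = fully polarized.
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    # Angle of polarization in radians.
    aop = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aop
```

For fully polarized light at 30 degrees, for example, the three measured intensities are 0.75, 0.933, and 0.25, and the formulas recover a degree of polarization of 1 at a 30-degree angle.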
Author: Lisa Fleming
Advisors: Santiago; Jaggard
The blackboard remains a classroom standard, allowing lecturers to visually communicate information and erase it as necessary for reuse. As society moves through the digital age, storing notes electronically has become a more attractive means of preserving information.
Several products have been developed that attempt to integrate these two ideas, often incorporating a mounted projector with a computer to send and receive information. However, the implicit requirement of an overhead projector makes this a very costly alternative to the traditional blackboard. Furthermore, any extra embedded features offered, in an effort to justify cost, detract from the primary purpose of loading and storing written data.
In the chosen approach, a potentially cost-effective alternative is explored, making use of electromagnets within a self-contained device. Based on the superparamagnetic, luminescent properties of an iron oxide based nanomaterial, an array of electromagnetic coils generates magnetic fields that alter the color of the compound.
The hardware supporting each pixel is able to accomplish two main goals: detecting when the stylus has passed over a pixel, and then setting the current through the electromagnet to produce the correct magnetic field. A microcontroller, responsible for activating the coils, also stores the state of the array that can then be exported to an image file or redisplayed on the board.
The proof-of-concept prototype allows the user to draw using one of three colors, as well as erase. The size of the screen is three pixels by five pixels so that single digit numbers and most letters can be displayed.
Author: Gabriel Kopin
A large amount of research today involves real sensor networks with hundreds of nodes. It is very difficult to "see" and track what is going on at each node, which impedes scientific progress. A solution to this problem is a programmable interface with interactive sensors that can be used as a generic visualization system for large data sets. The system could be used to simulate sensor network routing schemes and see how data disseminates across thousands of nodes, display which nodes are getting too much traffic, which have dying batteries, and so on. The goal is to create an LED array programmed to visualize a set of data and respond to changes in sensor inputs: initially touch and, in a future implementation, sound and motion.
When a typical light-emitting diode (LED) is forward-biased, it emits light. When the voltage difference across the poles is reversed, so that the high voltage is at the cathode and the low voltage at the anode, the LED is reverse-biased and no light is generated. Like ordinary diodes, however, a reverse-biased LED passes a small leakage current across its junction, and this leakage current is directly proportional to the amount of incident light on the LED. By measuring the time the internal capacitance takes to discharge (i.e., to fall from a HIGH to a LOW logic state), we can determine the extent of a user's interaction with a particular LED or region of LEDs.
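The timing trick above can be modeled in a few lines. All of the constants below (junction capacitance, photocurrent per lux, touch threshold) are illustrative assumptions, not measured values from this project's hardware; a real implementation would time the pin transition on the microcontroller itself.

```python
def discharge_time_us(lux, c_pf=15.0, vcc=5.0, i_per_lux_na=0.5):
    """Time (microseconds) for the reverse-biased LED's junction capacitance
    to fall from Vcc to 0, approximating the discharge as linear:
    t = C * V / I_leak, where I_leak scales with incident light."""
    i_leak_a = i_per_lux_na * 1e-9 * max(lux, 1e-6)
    return (c_pf * 1e-12 * vcc / i_leak_a) * 1e6

def is_touched(lux, threshold_us=2000.0):
    """A finger shadowing the LED lowers the incident light, so the
    discharge slows; an unusually long discharge time reads as a touch."""
    return discharge_time_us(lux) > threshold_us
```

Under these assumed constants, ordinary room light discharges the junction quickly, while a shadowed LED takes several times longer and crosses the touch threshold.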
Through this precise measurement of incident light, the generic visualization system is implemented as a touch-sensitive LED panel. A future implementation will integrate multiple panels into an array forming the Interactive LED (iLED) Wall. The iLED Wall can then be used as a visualization tool for engineering research and countless other applications: an educational tool, games, or even an aesthetic enhancement.
Authors: Leon Hermans, Tsanyu Jay Huang
In today's expanding business environment, conference call technology has become an integral tool for communication between remote offices. A limitation of conference calls, however, is the inability to distinguish multiple voices when several speakers are talking simultaneously through the conference phone, resulting in confusion and inefficiency.
The goal of this project is to implement an adaptive noise filtering system capable of isolating the voices of two speakers in a conference room setting in real time. This will be accomplished through blind source separation, a digital signal processing technique capable of performing the voice separation. It is anticipated that the system will be able to process two live speakers and separate their individual voices in approximately real time, regardless of their orientation to the microphones.
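The flavor of blind source separation can be shown with a toy example: two synthetic "voices" are mixed by an unknown matrix and recovered with a small FastICA-style fixed-point iteration. This is an instantaneous-mixture sketch only; real conference-room audio is a convolutive mixture, and the project's real-time algorithm would be considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4000)
s1 = np.sin(2 * np.pi * 7 * t)               # stand-in for speaker 1
s2 = np.sign(np.sin(2 * np.pi * 13 * t))     # stand-in for speaker 2
S = np.vstack([s1, s2])
A = np.array([[0.8, 0.4], [0.3, 0.9]])       # unknown "room" mixing matrix
X = A @ S                                    # the two microphone signals

# Whiten the microphone signals (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Symmetric FastICA fixed-point iteration with a tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(100):
    G = np.tanh(W @ Z)
    W = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                               # re-orthogonalize the rows of W

Y = W @ Z                                    # recovered sources (up to order and sign)
```

Blind source separation can only recover the sources up to permutation and sign, which is acceptable here: either output channel can be routed to either listener.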
Facial recognition and speaker verification systems have been widely used in the security field, where they must be very accurate to prevent unauthorized users from accessing classified information. The extensive list of possible commercial uses of these technologies, however, has yet to be taken advantage of.
It is often difficult to remember the name of a person who is encountered out of context or infrequently. This situation can prove to be very embarrassing for the forgetful person. It can also be insulting to the person who is not remembered. The Personal Memory Assistant uses facial recognition and speaker identification to help avoid this situation.
A user discreetly collects images and voice samples of the person to be identified. The facial recognition component analyzes the image to identify the three closest facial matches in the system. The speaker identification component does the same to identify the top two voice matches. The top-ranked IDs are compared using an algorithm that was developed through testing. If the IDs match, a picture of the person and a personal profile are displayed to the user. If no match is made, the user has the option to add the subject to the database. In addition to the identification process, the system also allows searching for and updating entries in the database.
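The abstract does not spell out the comparison algorithm, so the sketch below is only one plausible fusion rule: scan the ranked face candidates and accept the first one that also appears in the voice shortlist. The function and ID names are hypothetical.

```python
def fuse_ids(face_top3, voice_top2):
    """Hypothetical fusion rule in the spirit of the text: if any of the
    top-3 face matches (ranked best-first) also appears among the top-2
    voice matches, declare that ID; otherwise report no match so the
    user can add a new entry to the database."""
    for face_id in face_top3:
        if face_id in voice_top2:
            return face_id
    return None
```

A real system would likely also weigh match confidence scores; the project's actual rule was tuned empirically through testing.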
The Personal Memory Assistant will prove to be very useful not only as a memory backup but also as an organized database of acquaintances. Users without memory trouble will find the system equally valuable.
Author: Elisa Downey-Zayas
Researchers in legged robotics increasingly desire real-time body pose knowledge about the robots they program. Body pose knowledge enables more efficient and extensive repertoires of behavior. A variety of methods have been developed for rigid body pose estimation, yet none are suited to legged machines such as the educational robot called Edubot. The implemented approach is a method of extracting body pose information from either single or multiple acceleration readings.
One accelerometer located on the Edubot, capable of measuring acceleration in 3 degrees of freedom (DOF), enables static body pose estimation. Static body pose is estimated using the gravitational force of 1 g toward the earth to determine how the relative forces on the X, Y, and Z axes of the accelerometer's Cartesian coordinate system relate to body pose. These estimates assume the body is static, so that gravity is the only acceleration sensed and a single set of readings suffices. Dynamic body pose estimation is an extension of this concept; it takes into account the additional accelerations due to propulsion of the Edubot. In order to perform dynamic body pose estimation and fully describe the current acceleration of the entire body, the acceleration vectors of three or more distinct points are required.
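The static case reduces to reading the gravity vector in the body frame. A minimal sketch, assuming readings in units of g and a Z-up axis convention (the Edubot's actual axes may differ):

```python
import math

def static_pose(ax, ay, az):
    """Estimate pitch and roll (degrees) from one 3-DOF accelerometer
    reading, assuming the body is at rest so the only sensed acceleration
    is gravity (1 g). Axis convention assumed: X forward, Y left, Z up."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

Resting flat, the sensor reads roughly (0, 0, 1) and both angles are zero; a 30-degree nose-up tilt shifts half a g onto the X axis and the pitch estimate follows.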
The combination of one accelerometer and one microprocessor is sufficient to perform some specific body pose estimation maneuvers, but not all. For example, with the information available from this single accelerometer and microprocessor set, the Edubot can successfully and autonomously maneuver through a channel with inclined walls and rough terrain. The same hardware configuration also enables the Edubot to estimate the angle of incline at which its body is currently positioned in both the pitch and roll directions.
Additional software built on the developed accelerometer suite hardware can yield dynamic body pose estimation. Multiple accelerometers in the form of a sensor suite would be necessary to perform this task, with little change in the format of the software and firmware. The main task is to arrange the accelerometer suites independently and correctly calibrate the data readings from each source separately within the firmware. If this multi-unit system were completed, more complex body pose estimation maneuvers could be performed, such as self-righting procedures.
Authors: Richard Prusak, Mike Kaplan, Abraham Dauhajre
In the music industry, there has been a recent trend of increased research and manufacturing with regard to self-tuning guitars. Although musicians can use devices such as chromatic tuners, which determine and display a string’s pitch using the guitar’s electrical output, these devices require the musician to manually tune the instrument. It was not until the release of Gibson’s Robot Guitar in late 2007 that a musician could own an automated tuning system for their electric guitar. However, it is currently only made in the style of Gibson Les Paul and SG guitars, leaving musicians with limited options if they wish to own a guitar where automated tuning is possible.
The focus of this project was to find a way to create an automated tuning system that was accurate, affordable, and external. In addition, a delicate balance had to be struck between those parameters and the amount of work required of the user.
Using a laptop, the electrical impulses from the guitar’s humbucker pickup are recorded and a Fast Fourier Transform (FFT) is taken of the recorded file. Data about the recorded frequency, which string is being tuned and the desired tuning is then communicated to the microcontroller. Using this information, the microcontroller calculates how much rotation of the tuning pegs is required to tune the string, converts the value into a known number of encoder pulses and outputs a signal to the DC motor. By counting the pulses from the DC motor encoder, the microcontroller is aware of the motor’s position, allowing the microcontroller to stop the motor when the desired rotation is reached.
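The signal chain above can be sketched end to end: pick the FFT peak of the recording, then convert the frequency error into a signed number of encoder pulses. The sample rate, pitch-change-per-turn, and pulses-per-turn constants here are illustrative assumptions, not the project's measured values.

```python
import numpy as np

FS = 8000  # assumed sample rate (Hz)

def detected_pitch(samples):
    """Estimate the dominant frequency of a recorded string from the
    magnitude peak of its windowed FFT."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / FS)
    return freqs[spectrum.argmax()]

def encoder_pulses(f_now, f_target, hz_per_turn=25.0, pulses_per_turn=360):
    """Convert the frequency error into encoder pulses for the DC motor.
    hz_per_turn (pitch change per full peg turn) and pulses_per_turn are
    hypothetical constants that would be measured on a real guitar."""
    turns = (f_target - f_now) / hz_per_turn
    return round(turns * pulses_per_turn)
```

The microcontroller would then drive the motor until the encoder has reported that many pulses, stopping at the computed peg position. Note that frequency resolution improves with longer recordings: two seconds of audio gives 0.5 Hz FFT bins, comfortably inside the ±2 Hz target.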
Since even the most musically inclined listener can, at best, detect a frequency change of approximately ±2 Hz, the automated tuner must at least match this resolution. With the aforementioned method, the final product tuned a guitar in approximately 3.5 minutes with an average string resolution of ±1.5 Hz.
Author: Arjun Batra
In the late 1990s, movie theatres came under severe criticism for their unwillingness and inability to make movies accessible to the hearing-impaired. Following an ethical firestorm and several lawsuits, movie theatres finally showed a greater desire to implement a technology that would provide subtitles for the hearing-impaired, while maintaining the viewing experience for the general audience.
Several technologies, including the Rear Window Captioning System co-developed by the Media Access Group and Rufus Seder, have been tested since then, but none fit the preferred cost and efficiency requirements put forth by the movie theatres.
DVD Genesis addresses these requirements by extracting subtitles from a PC's DVD player. The subtitles are then sent to an external display that can be mounted inside a seat's cup-holder or on its armrest. The display's arm can then be adjusted to position the display in the user's line of sight at the bottom of the movie screen, giving the appearance of superimposed subtitles.
Research shows that by 2010 most movie theatres will have transitioned to digital media (MPEG-2), making DVD Genesis an extremely powerful solution for this environment. Additionally, DVD Genesis's unique low-cost design and user-friendly interface make it suitable for other environments as well.
Author: Suman Addya
Advisors: Kimbrough; Ungar
Keeping track of laps while swimming can be a difficult task. The idea for the Lapview watch was conceived with the intent of creating a simple device that could automatically count laps for a swimmer, with absolutely no manual intervention. The goal was to create a simple (and small) watch-and-dongle system that could communicate wirelessly. This would allow portability, convenience, and user friendliness, making Lapview a must-have for every swimmer.
In order to create a device that could work in air as well as under water, magnetic communication was chosen as the technological direction. A dongle at the edge of the pool would transmit magnetic pulses over a short range of zero to five feet. A magnetic field sensor in the watch would detect the field every time the swimmer came within range and communicate this information to the processor in the watch, allowing lap number and lap time to be calculated and displayed. The final implementation of the device incorporates an OLED display showing lap count, lap time, and a review of up to 99 previous laps. Three buttons on the side of the watch allow selection of modes, start/stop, lap recall, and lap resetting. The hope is that the extremely affordable Lapview watch will become as standard a swimming accessory as goggles and trunks.
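One practical detail of field-based detection is that a swimmer lingering near the wall would cross the dongle's field several times per turn. A simple lockout, sketched below with an assumed minimum lap time, collapses clustered detections into a single lap:

```python
def count_laps(detections, min_lap_s=15.0):
    """Turn timestamped field detections (in seconds) into a lap count.
    Consecutive detections closer together than min_lap_s are treated as
    the same pass through the dongle's field, not a new lap. min_lap_s is
    an assumed lockout; a real watch would tune it to the pool length."""
    laps = 0
    last = None
    for t in detections:
        if last is None or t - last >= min_lap_s:
            laps += 1
            last = t
    return laps
```

With the lockout in place, a burst of detections at each wall touch still registers as exactly one lap.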
Author: Michael Costa
Advisor: Van der Spiegel
In both amateur and professional baseball, coaches monitor the performance of pitchers by counting the total number of pitches thrown in a game. This pitch count helps coaches prevent injury in younger pitchers, as well as track the expected fatigue of older and professional players throughout a game. The problem is that a straight pitch count does not factor in the weather, the type of pitches thrown, or the initial state of the pitcher.
A device that tracks the pitch count as well as balls, strikes, walks, strikeouts and the pitch per out ratio for each inning and in total would give coaches access to more information and allow them to evaluate actual pitcher performance. In addition, amateur baseball teams would be able to, like professional teams, collect statistics in real time for coaching decisions.
In the chosen approach, digital logic is implemented on a field-programmable gate array (FPGA) board. Multiple push buttons are used to input data. In order to keep the device user-friendly, the user presses only one button after the outcome of each pitch. These inputs increment the appropriate registers for the event. The logic also automatically accounts for the end of innings and recognizes every possible outcome of the current pitch. The statistics per inning and in total are shown on 7-segment LED displays.
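The register-increment behavior can be mirrored in software (the project itself implements this as digital logic on the FPGA, not as code). The outcome names and the out-counting rule below are illustrative assumptions:

```python
class PitchCounter:
    """Software sketch of the one-button-per-pitch idea: each press
    increments the pitch count plus the matching statistic, and three
    outs automatically roll the count into the next inning."""

    def __init__(self):
        self.inning = 1
        self.outs = 0
        self.totals = {"pitches": 0, "ball": 0, "strike": 0,
                       "walk": 0, "strikeout": 0}

    def pitch(self, outcome):
        self.totals["pitches"] += 1
        if outcome in self.totals:
            self.totals[outcome] += 1
        if outcome in ("strikeout", "out"):   # "out" = ball put in play
            self.outs += 1
            if self.outs == 3:                # end of inning is automatic
                self.outs = 0
                self.inning += 1
```

The FPGA version would keep a separate set of registers per inning so the 7-segment displays can show both per-inning and total statistics.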
The full implementation of the system calculates and displays pitches, balls, strikes, walks, strikeouts, and pitches per out per inning and in total. It also allows the user to toggle through these statistics on multiple displays in order to compare data from multiple innings for tracking and analysis purposes.
©University of Pennsylvania, Department of Electrical and Systems Engineering.
This site maintained and administered by Siddharth Deliwala, (deliwala_at_ee.upenn.edu ), Last updated Apr 18, 2008