Dec 06

3×3 LED Array Prototype

Here we have our first prototype of a 3×3 LED array that will be used to investigate the Arduino microcontroller and the possibilities of implementation within the Project54 simulator. This device is the first generation of a new directional navigation device. Expanding on this idea, my senior project team and I will create larger LED arrays able to convey advanced navigation directions. Our plan is to first design and build a 4×8 LED array and eventually expand it to a 7×8 array. As part of the final design, the 7×8 array will be installed and tested within the simulator environment. Hopefully, data gathered from testing this device will show increased “percent dwell time” (PDT) on the road, instead of the driver looking at a GPS system elsewhere in the car, such as the center console.

-Zachary Cook

Sep 28

Transcription and Some Cross Correlation

In one of my previous posts I demonstrated some software I have been working on at Project54. In the past weeks I have added a few features to this project.

Originally, the project could import the associated dialogue and display it at the correct time location. This was cool, but the user could only view the transcription, not edit it. This is how the original transcription looked:


130.3656830 132.4688772 it’s_uh_a_body_of_water

132.7277318 133.5042958 a_river_boat

This format does not account for any lag in the participant's speech (for example: "it's . . . uh . . . a body of water"); it is just one line of text before the next person spoke. To analyze speech, it is important to be able to edit the transcription file while listening to it. The software now allows editing and saving of the transcription file. This is demonstrated below.
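For anyone curious about the format itself: each line is a start time, an end time, and the phrase with words joined by underscores. A minimal parser for this format might look like the following Python sketch (this is illustrative only, not the actual Project54 code, which runs on the Microsoft Surface):

```python
def parse_transcription(lines):
    """Parse lines of the form '<start> <end> <word_word_word>' into
    (start_sec, end_sec, phrase) tuples."""
    phrases = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # split on whitespace at most twice: start, end, rest
        start, end, text = line.split(None, 2)
        phrases.append((float(start), float(end), text.replace('_', ' ')))
    return phrases

lines = [
    "130.3656830 132.4688772 it's_uh_a_body_of_water",
    "132.7277318 133.5042958 a_river_boat",
]
for start, end, phrase in parse_transcription(lines):
    print("%.2f-%.2f: %s" % (start, end, phrase))
```

Parsing this into (start, end, phrase) tuples is what lets the editor drag a phrase along the time axis and write the adjusted times back out.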

Adding a New Phrase

The user makes a two-finger gesture to highlight the desired dialogue to be moved. One finger is then used within the highlighted area to drag the dialogue along the x-axis. Once the dialogue is exactly where the participant said it in time, the user drags the highlighted area down. That phrase is now separated from the original phrase, represented by a green line.

Another new feature is cross correlation of the sound file and pupil diameter. Again, this is done using the highlight capability. Once the area is highlighted, the user presses the blue button, and a new graph area appears with the correlation of the two signals.

Cross Correlation

The correlation feature still needs a lot of work but the base algorithm is implemented.
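The base algorithm is ordinary discrete cross-correlation of the two (zero-meaned) signals. Since the Surface app is written in C#, here is an illustrative NumPy stand-in for the idea, not the actual implementation:

```python
import numpy as np

def cross_correlate(a, b):
    """Normalized cross-correlation of two equal-length signals.
    Identical signals peak at 1.0 at zero lag."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # remove the mean so slow offsets don't dominate the correlation
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode='full')
    norm = np.sqrt(np.sum(a**2) * np.sum(b**2))
    return corr / norm if norm > 0 else corr

# the index of the peak, relative to the center, gives the lag
sig = np.sin(np.linspace(0, 10, 200))
corr = cross_correlate(sig, np.roll(sig, 25))
lag = corr.argmax() - (len(sig) - 1)
```

A peak away from zero lag would suggest, for example, that pupil dilation trails the spoken task by some delay, which is exactly the kind of relationship the highlighted-region correlation is meant to expose.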


~Nick Sjostrom

Aug 31

Google Crisis Map

After the disastrous oil spill in the Gulf of Mexico, the National Oceanic and Atmospheric Administration (NOAA) developed the Environmental Response Management Application (ERMA). ERMA provides numerous datasets as overlays on Google maps, providing useful information for response organizations.

Now, with Tropical Storm Isaac making its way through the Gulf Coast, Google has created an application with the same purpose. The Crisis Map provides the same functionality as ERMA but for Hurricane Isaac. Some of the Crisis Map's data is even provided by NOAA.

It seems Google has teamed up with many other organizations through its project Google Crisis Response. Since Hurricane Katrina hit in 2005, Google has been responding to natural disasters by making useful information available to responders. On the site they provide numerous tools, such as Google Public Alerts and Google Person Finder, to help the public help themselves.

Google Crisis Map

After looking through the list of Google's tools on the Crisis page, I decided to look into custom Google Maps. It turns out that Google provides an excellent API for their Maps. All you need is a free key provided by Google and you can add a Google Map to your web page! Here is the JavaScript with a few options added:

<script type="text/javascript" src=""></script>

<script type="text/javascript">
function initialize() {
  var mapOptions = {
    center: new google.maps.LatLng(43.076712, -70.762072),
    zoom: 8,
    mapTypeId: google.maps.MapTypeId.SATELLITE
  };

  var map = new google.maps.Map(document.getElementById("div_id"), mapOptions);
}
</script>
That’s it!!!!!

The API documentation includes detailed instructions on how to customize your map and even add overlays, like the ones in the Crisis Map. I obtained a key of my own and I am pretty excited to play around with this.

It’s great to see such a powerful company use their resources to help people. My faith in Google seems to increase every day.

Jul 27

Visualizing Sound Files on A Multitouch Surface

For most of the summer now I have been developing software to allow for the visualization of sound files. Other products, such as Wavesurfer and Audacity, already accomplish this task along with other capabilities. What differs between those programs and the one I have been working on is that mine is built for the Microsoft Surface. Bringing this type of technology to the Surface will allow groups to analyze dialogue (and perhaps other types of data) on a large multi-touch surface.

The main purpose of this program is to import a sound file (.wav) along with the dialogue associated with it. Below you can see the interface with a visualized sound file of a conversation, as well as the measured pupil diameter of one of the participants (lower graph). This conversation was part of another experiment in which two participants played a game of Taboo over voice chat while one of them was using the driving simulator here at Project54.

Dialogue Visualization

Dialogue Visualization_02

Use scroll bar to navigate through file


You can also see a quick video of the software's capabilities. A user can play, pause, and stop the sound. Also, the transcription file from the recorded dialogue can be displayed within the visualization. The proposed use for this is to visually analyze the relationship between the spoken tasks and pupil diameter (cognitive load).

As you can see, the software’s capabilities are very limited and there is much more to do. This project is in active development so there will be more to come!

~Nick Sjostrom

Jul 26

Extracting data from Matlab figures

Has it ever happened to you that you have a Matlab figure, but you forgot to save the corresponding data? It has happened to me more than once, and each time I have to remind myself how to get the data back. Finally, I decided to save the knowledge here for anyone who may run into the same problem.

The good thing about Matlab figures is that they hold all the data, and it can be extracted fairly easily. I will cover extracting line data here, which is the most common case. However, the procedure can easily be modified for other data types as well.

The figure below shows an example of multiple line graphs.

The code below can be used to extract all three sine functions from the above figure:

%get the handle of the current figure
fig = gcf;

%get the handles of the active axes
axes = get(fig, 'Children');

%get the handles of the data objects associated with each axis
data = get(axes, 'Children');

%these cell variables will hold the extracted data
x_data = {};
y_data = {};

%go through all line objects and extract X and Y data
for i=1:length(data)
    %with multiple axes 'data' is a cell array, otherwise a handle vector
    if iscell(data)
        current_data = data{i};
    else
        current_data = data(i);
    end
    object_type = get(current_data, 'Type');
    if iscell(object_type)
        for j=1:length(object_type)
            %check whether the current object matches the desired data type
            if strcmp(object_type{j}, 'line')
                %save extracted data
                x_data = vertcat(x_data, get(current_data(j), 'XData'));
                y_data = vertcat(y_data, get(current_data(j), 'YData'));
            end
        end
    else
        %check whether the current object matches the desired data type
        if strcmp(object_type, 'line')
            %save extracted data
            x_data = vertcat(x_data, get(current_data, 'XData'));
            y_data = vertcat(y_data, get(current_data, 'YData'));
        end
    end
end
If you want to extract other data types that may be present in the figure, change the second parameter in both "strcmp" calls and possibly how the data is stored (for example, if the data represents a two-dimensional matrix as opposed to the vectors in this example).

Zeljko Medenica

Jul 19

KEEPERS – Inspired to Become Engineers at a very young age

After a morning workshop about electricity, KEEPERS campers continued to have fun while learning about some of the applications of electrical and computer engineering in the UNH-ECE driving simulator lab. Today, Oskar Palinko and I hosted a group of about 25 KEEPERS campers. To ensure that every camper had an opportunity to learn about and interact with our driving simulator, we divided them into small teams of four or five, and each kid had his or her chance to see and play with the simulator. The following pictures show how much fun the KEEPERS kids had in our driving simulator lab.

The smiles on their faces really show their enthusiasm about the driving simulator and engineering applications in general! "Wow, this is the most exciting thing I've ever tried so far! I want to become an electrical engineer so that I can design cool stuff like this, too!" said one of the KEEPERS kids after his first driving experience in the simulator.

The driving simulator can really excite kids' interest in considering engineering as their major and career when they grow up! What a great joy for these kids to see at such a young age how engineering theories are applied in the real world to solve some of our community's problems, for example, driver distraction!

Patrick Nsengiyumva

Jul 17

Optimizing Graphs on The Microsoft Surface

Recently, while working on a project to visualize .wav files on the Microsoft Surface, I ran into an issue regarding the canvas control's size. While developing this software I am testing with a .wav file about 4 minutes long. Visualizing this file creates a canvas with a width of about 20,000 pixels (a fairly large canvas), and it will need to support much longer files. I am using a ScrollViewer to maintain a viewport over the canvas so a user can scroll through the .wav file. The issue I ran into was that after drawing the .wav file (using Polyline) on the canvas, scrolling across the canvas had significant lag. I already had separate canvases drawing different things in order to maintain the graph while scrolling, so, at first, I didn't think there was much I could do. Here is a little info-graphic for the design:


While playing around with the program, I realized that the grid I was drawing in the background did not have to span the entire width of the .wav graph; it only needed to be the width of the viewport, held at a static position:


Now, the grid stays within the viewport while everything else scrolls past. This small change made an incredible difference. The grid was put into yet another canvas (I already have four placed on top of each other). Although the grid lines are thin and a light shade, they obviously require a significant amount of memory to draw and display. I believe I will also be able to apply this technique to a few other aspects of the design.

Here is a video comparison of before and after the simple optimization.



There is a pretty obvious difference: lots of lines means lots of resources.

It may be even better to render the grid once to an image and use that as the background, instead of redrawing the lines every time. I will test this next.

~Nick Sjostrom

May 22

Writing Code in Your Free Time

Practice makes perfect, and I recently discovered a perfect website to assist in coding practice:

This site contains somewhere around 380 math-related programming problems that are open to solve. Once you create an account, the site tracks your progression through the problems. You also earn awards and can view friends' progress (I would think this makes it a little competitive).


This is a great way to exercise some programming skills (as well as math skills) in the language of your choice. The problems are not too difficult so far, although I have only completed 8 and I am sure they will become pretty hard. Check it out to kill some free time, especially now that summer is here!
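To give a sense of the early problems: one of the first asks for the sum of all natural numbers below 1000 that are multiples of 3 or 5, which takes only a couple of lines in Python:

```python
# sum all natural numbers below 1000 that are divisible by 3 or 5
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # 233168
```

Brute force works fine here, though many of the later problems are designed so that a naive loop is too slow, which is where the real practice comes in.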

Make sure to add my friend key (85212247340548_03cdfb50a5f1e2ea8c41ab4b87bbab11) and we can race through the problems!

Apr 21

Real-Time Pupil Diameter Based Cognitive Load Estimation

One of the least intrusive methods for assessing cognitive load is measuring pupil diameter change using remote eye tracking. Because of the task-evoked pupillary response (TEPR), the pupil dilates when a person is faced with a challenging cognitive task. But the pupil's diameter also changes with lighting conditions (the pupillary light reflex).

Related research has shown that by predicting the pupil reaction due to illumination, cognitive load can be estimated.

Since the quality of the cognitive load estimate depends directly on the quality of the predicted pupil reaction due to illumination, in recent work the prediction algorithm was improved by:

  • filtering out high-frequency sensor noise;
  • eliminating gaze lag;
  • modeling pupil dilation and contraction transfer functions for each subject individually (rather than finding a universal transfer function).
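Conceptually, the load estimate is then the residual between the (noise-filtered) measured diameter and the illumination-driven prediction. A simplified sketch of that idea, with hypothetical signal names and a crude moving-average filter standing in for the actual noise filtering:

```python
import numpy as np

def estimate_tepr(measured, predicted, fs, cutoff_hz=2.0):
    """Residual between measured pupil diameter and the predicted
    illumination-driven diameter, after simple noise smoothing.
    'fs' is the eye tracker's sampling rate in Hz."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # crude low-pass: moving average sized to the cutoff frequency
    win = max(1, int(fs / cutoff_hz))
    smoothed = np.convolve(measured, np.ones(win) / win, mode='same')
    # dilation not explained by illumination is attributed to cognitive load
    return smoothed - predicted

def correlation_coefficient(a, b):
    """Pearson correlation between the estimate and a reference signal."""
    return np.corrcoef(a, b)[0, 1]
```

The actual algorithm uses per-subject transfer functions rather than a simple subtraction, so treat this purely as an illustration of the residual idea.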

The results of the improved estimation algorithm can be seen in Figures 1 and 2. When no cognitive task is given to the subject and the pupil changes only due to changes in scene illumination, the prediction correlates well with the signal obtained by the eye tracker (Figure 1). In comparison to the prediction output, the pupil dilates significantly after the moment the cognitive task is imposed (Figure 2).

Figure 1.  Measured data and prediction without cognitive task
Figure 2. Measured data and prediction with cognitive task

A comparison between the estimated TEPR and the TEPR measured in an experiment where subjects were exposed only to the cognitive task is shown in Figure 3. The correlation coefficient for these two signals is 78%, which can be considered a high match.

Figure 3. Cognitive task and cognitive load estimation

Since pupil-diameter-based cognitive load estimation relies on a good prediction of the pupil reaction to illumination conditions, this improved prediction algorithm makes real-time estimation (Figure 2) successful.

Apr 17

Springtime abroad!

Spring has come to Budapest and things continue to get more beautiful!

A semester in Budapest sure keeps you busy.  It’s been hard to make time for a blog.  The four of us here are having a truly amazing time in Europe!

We are finding that it is rewarding to be here and travel everywhere, but it isn't any less difficult to balance all the traveling with academics. The pace is fast, and a lot more is thrown at the students at once because lectures run in two-hour blocks twice a week for each class. It's hard to believe the semester is about three quarters over. Between us, the four of us have been to at least 8 different countries. It started with Vienna, Austria, then Bratislava, Slovakia. After that, the four New Hampshirites split into two groups: two went to Prague, Czech Republic, on a trip organized by the Erasmus Student Network, while the others visited friends in Krakow, Poland. Most recently, the two who didn't go to Prague with the ESN went there over Easter weekend (a long weekend for us), one went to England to visit family, and I went to Italy, met a friend of mine studying abroad there, and continued with them to Switzerland!

Hopefully that wasn't too confusing… Needless to say, we've wasted no time exploring Europe and representing the U.S. and UNH abroad. Another place we were able to visit was Serbia, where one of the best shots of the group was taken (below). We are settling down from traveling for the time being and studying for some upcoming exams. Photos will be on Flickr shortly.


-Sam Batton

Outside the Belgrade fortress in Belgrade, Serbia
