HCI, Research, User interfaces

Module+ Video Demo

After months of work, the interface is finally finished and the user tests have been run! It has been a long process of trying (and sometimes failing) to get everything working as it should, so it is great to have finally got here.

Here is a conference-style video showing the interface in action during the user tests.

Module+ (module-ette) is the final outcome of the past few months' work. It is a modular tangible interface designed as a tool for both music making and interactive sonification. Both of these tasks can be seen in the video, which documents some of the user tests.

Unfortunately, due to camera issues on the day, the full functionality was not documented as well as I would have liked.

If you have any questions about the interface, please do not hesitate to get in touch via this blog or any of the links provided.

HCI, User interfaces

Week 5 Update: Interface Design

It has rained consistently this week… which has made staying inside and reading articles a lot easier! Here is what I have been reading about:

Interaction Design and Human-Computer Interfaces

Some interesting papers I have read this week come from researchers at Queen Mary University of London, who have looked at the dynamics of musical interaction: both between performers in a collaborative environment and between the performer and the instrument itself [1], [2]. Some interesting themes have emerged, such as the former paper’s aim of making a piece of technology that engages novices and computer musicians alike. The latter discusses the properties a technology needs, such as being intuitive, unobtrusive and enticing, but also the tensions that can arise between them, e.g. designing something that is unobtrusive yet still enticing.

These papers also mention the reacTable, a highly interactive piece of technology developed by a team at Universitat Pompeu Fabra in Barcelona:

The reacTable* in action at a conference [3]
This is one of many tabletop user interfaces and has been fairly successful since its introduction in 2006. Next week I will look further into its conception and design, and at what has and has not worked well, so stay tuned for that! It has also given me an idea about using tangible interfaces for music production: I really like the modular nature of the table and how sound can be synthesised from the different “modules,” similar to the Musical Trinkets research that I discussed in an earlier post.

A lot of the papers I have been reading have diverged somewhat from my initial research into interactive sonification. However, one area of current research that interests me is human-computer interaction, specifically the ways that we as musicians interact with technology to enhance music composition. I may therefore focus my research on user interfaces and interaction, and on how they affect both sonification and music composition.

Choosing a microcontroller 

One of the biggest tasks arising from this work will be creating some sort of tangible interface (and hopefully testing it on unsuspecting members of the public). To do this, I also need to consider the hardware requirements in my literature review.

Having read through some of the literature online, I have had a chance to see how some of these technologies have been built. As my time on this project is limited, using a microcontroller (a small computer on a single integrated circuit) could be a useful way to achieve a high level of functionality in a short space of time. Three main candidate boards are the Arduino, the Raspberry Pi (strictly a single-board computer rather than a microcontroller) and the BBC micro:bit. The article below explores some of the main benefits and drawbacks of each.
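Whichever board ends up being used, the basic pattern for getting sensor readings from a microcontroller into sonification software is much the same: the board streams values over a serial/USB connection and a small script on the computer reads them and passes them on. Here is a minimal sketch of the host side in Python, assuming the pyserial library and a hypothetical board that prints one numeric reading per line; the port name and baud rate are placeholders:

```python
import serial  # pyserial: pip install pyserial

# Placeholder values: the actual port name and baud rate depend on the board used.
PORT = "/dev/ttyACM0"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as board:
    while True:
        raw = board.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue  # nothing arrived within the timeout
        try:
            reading = float(raw)  # one sensor value per line, e.g. "512"
        except ValueError:
            continue  # skip malformed lines
        print(reading)  # here the value could be mapped to a sound parameter
```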

———-

As an aside, I also got the chance to play with the sonicules game developed by the team at the University of York, which allows users to interact with the sonification of drug molecules, using a 3D mouse to explore the virtual chemical environment. It was compelling to have a play with it and to see how others interacted with the technology. The question of how intuitive a piece of technology is could be seen in action here: occasionally a movement of the mouse felt counterintuitive to some people yet perfectly intuitive to others. Overall the game was fun and intriguing, and it will definitely be relevant to the work I might choose to do.

Trying to fit the target enzyme with sound: easier than having to do it visually

 

Let me know in the comments below which microcontroller you think might be best for an interactive system like this, and what you make of the reacTable technology!

 

References

[1] B. Bengler and N. Bryan-Kinns, “Designing collaborative musical experiences for broad audiences,” in Proceedings of the 9th ACM Conference on Creativity & Cognition, ACM, 2013.

[2] J. G. Sheridan and N. Bryan-Kinns, “Designing for performative tangible interaction,” International Journal of Arts and Technology, vol. 1, no. 3-4, pp. 288-308, 2008.

[3] S. Jordà et al., “Reactable | Music Technology Group”. Available online: http://mtg.upf.edu/project/reactable [Accessed: May 2017].

 

Research, sonification, User interfaces

Week 4 Update: Initial Research and Sonifications

This week, my aim is to narrow down the scope of the literature I have been searching through and come away with a more specific, cohesive idea to work with.

On Wednesday of this past week, one of the library staff gave a talk about how to use search terms effectively to get the best possible results. This was definitely a worthwhile talk to attend, as it helped me realise the importance of having a clear set of search terms. To obtain these, I need to narrow my project down to a set of searchable terms which can hopefully be moulded into a title.

Refining the research

To do this, I decided to go back through what I have been reading for the past few weeks and see if anything had stood out for me. So far, I have read a fair number of introductory papers and highlighted some relevant sections. Grouping these together, I found that my main interests are:

  1. Interactivity: building a sonification system with intuitive user interaction, such as that seen in many tangible interfaces. What is it about a system that makes it intuitive? What research has been done on this, and what could still be done?
  2. Multimodal perception in auditory data analysis: how can our sensory processing be used to create a greater understanding of sonified data?
  3. Spatial sound and spatial data: using x-y-(z?) or spherical polar coordinates (r, θ, φ) to create a user interface which maps to some form of spatial sound? (A small coordinate-conversion sketch follows this list.)
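As a quick illustration of that last idea, here is a small Python sketch (my own, not taken from any of the papers) converting the spherical coordinates (r, θ, φ) of a point on an interface or in a dataset into Cartesian x-y-z, which is the form most spatial audio renderers expect. The convention used here (θ as azimuth, φ as elevation) is an assumption; conventions vary.

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """Convert (r, azimuth theta, elevation phi), angles in radians, to x-y-z.

    Assumes x points forward, y to the left and z upwards; other
    conventions simply permute or negate the axes.
    """
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return x, y, z

# Example: a source 2 m away, 45 degrees to the left, 30 degrees up.
print(spherical_to_cartesian(2.0, math.radians(45), math.radians(30)))
```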

Of these interests, I would most like to pursue the ideas behind user interaction and tangible interfaces, so I have decided to limit my search to these articles. I have also read some engaging ideas about catering to the needs of both musical novices and experienced computer musicians using interactive technology [1]. This rings especially true for sonification systems, which are often aimed at people with little or no prior musical experience, and it is something I would like to include in my research too. Hopefully this narrower scope will help me progress more quickly.

Creating sonifications 

This week, I also had a go at creating my own sonifications of data, based on the research I have been doing. Given the current political climate, I decided to use data provided by the Electoral Commission detailing voter turnout in every constituency at the 2015 UK General Election. To do this I mapped the data to MIDI notes and then, using the software Pure Data, sent them to a digital audio workstation (Ableton Live), where the MIDI input was converted to audio by a xylophone plug-in. This sort of sonification is known as parameter mapping [2].
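For reference, the mapping step itself is straightforward. The sketch below is not the Pure Data patch I used, but a rough Python equivalent of the same parameter-mapping idea, assuming the mido library and a hypothetical CSV file with one turnout percentage per constituency; the note range and timing are arbitrary choices.

```python
import csv
import mido

IN_CSV = "turnout_2015.csv"   # hypothetical file: one turnout percentage per row
LOW_NOTE, HIGH_NOTE = 48, 84  # map the turnout range onto three octaves
TICKS_PER_NOTE = 240          # arbitrary spacing between notes

with open(IN_CSV) as f:
    turnouts = [float(row[0]) for row in csv.reader(f) if row]

lo, hi = min(turnouts), max(turnouts)

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

for value in turnouts:
    # Linear parameter mapping: turnout percentage -> MIDI note number.
    note = int(round(LOW_NOTE + (value - lo) / (hi - lo) * (HIGH_NOTE - LOW_NOTE)))
    track.append(mido.Message("note_on", note=note, velocity=64, time=0))
    track.append(mido.Message("note_off", note=note, velocity=64, time=TICKS_PER_NOTE))

mid.save("turnout_sonification.mid")  # import into a DAW and assign any instrument
```

The resulting MIDI file can then be dropped into Ableton Live (or any DAW) and played through an instrument, which is essentially what the Pure Data patch did in real time.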

 

This was an interesting exercise in putting the knowledge I have gained over the past few weeks into practice. This short piece of sonification was more useful for seeing how the process works than for gaining much insight from the data. Perhaps it might also be interesting to juxtapose this sonification with another track sonifying voter turnout from last year’s EU membership referendum.

The MIDI notes generated by the sonification, shown in Ableton Live (which in themselves provide a visualisation of the data), and the Pure Data patch used to implement it.

Furthermore, this piece does not provide any user interaction with the data, which is crucial to interactive sonification. Hopefully, in the research I will be undertaking over the next few weeks, I will be able to bring this sonification work together with the work on user interaction.

Comment below with what you think of my research ideas and the sonification piece, and whether there are any other datasets that might be useful to sonify.

 

References 

[1] S. Saue, “A model for interaction in exploratory sonification displays,” Georgia Institute of Technology, 2000.

[2] T. Hermann, A. Hunt and J. G. Neuhoff (eds.), The Sonification Handbook. Berlin: Logos Publishing House, 2011.

Research, sonification, User interfaces

Week 3 Update: I see what you mean / I hear what you say

This week I will be discussing the main themes of what I have been researching in an attempt to narrow down my search. Once I have done this, hopefully I will be able to refine my title and create a more comprehensive search of the plethora of research papers out there regarding sonification.

Some light bedtime reading

Sonification as a data display tool

Why is it that our culture is so image-obsessed? (I’m not talking about what you might think I’m talking about here).

Imagine you’re trying to explain a new concept to someone who knows little or nothing about it. When they finally get what you’re talking about (depending on how good you are at explaining things), they might say something along the lines of:

“Ooh.. I see what you mean now.”

But why should they see what you mean, rather than hear what you say? When discussing new concepts, our initial ideas are usually visualised first. Some other common visualising phrases that have infiltrated our modern vernacular include:

“Let me look into that for you”

“Our company image reflects that of…”

“I have a vision for this project about sonification”

But what if there were another way to think about things? Or, more importantly for this research, what if there were a way to hear things?

Sonification has been around for a while now, and there have been many useful applications of it. Here is one very interesting TED talk discussing the use of sonification in particle physics, with data from the Large Hadron Collider:

This is just one example of the many ways we might be able to use sonification to enrich our lives and aid our research. In this talk, Asquith’s sonification uses a technique known as parameter mapping [1]. However, I would like to research areas of sonification that allow the user more interaction with the data than simply listening to it. One such form is interactive sonification [1], where the user is able to interact with the data directly; this draws on research from both human-computer interaction and auditory data analysis. Specifically, I would like to look at the use of interfaces in such a system.

Tangible Audio Interfaces

This week I have been looking at some novel interfaces for musical exploration, to consider how user interaction in interactive sonification might be improved. One particularly interesting paper describes a tangible user interface called Musical Trinkets [2], which uses passive, resonant, electromagnetically coupled tags to create a musical controller from various odds and ends.

Paradiso and Hsiao’s Musical Trinkets: http://resenv.media.mit.edu/sweptRF.html

This could have interesting applications to a sonification system, where each “trinket” could somehow be related to a parameter in a dataset, or could be used to explore data further through the tactile interface. Over the next few weeks I will be exploring more tangible audio interfaces.

Spatial Audio

Another interesting concept for sonification is the use of spatial audio. Spatial datasets can be sonified using either spatial or non-spatial audio and, similarly, non-spatial datasets can be sonified with spatial or non-spatial audio [3].

Head-Related Transfer Functions (HRTFs), which contain information about interaural time and level differences (ITD and ILD), can be used to give a sound both azimuth and elevation, as illustrated below:

Azimuth and elevation angles of a sound source relative to the listener
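To make the ITD part of this a little more concrete, here is a small Python sketch (my own illustration, not taken from the papers cited) using Woodworth’s classic far-field approximation for the interaural time difference as a function of azimuth; the head radius is a typical textbook value.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature
HEAD_RADIUS = 0.0875    # m, a commonly used average head radius

def itd_woodworth(azimuth_deg):
    """Approximate interaural time difference (seconds) for a far-field source.

    Woodworth's formula: ITD = (a / c) * (theta + sin(theta)),
    valid for azimuths between 0 and 90 degrees to one side of the head.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_woodworth(az) * 1e6:.0f} microseconds")
```

At 90 degrees this gives roughly 650 microseconds, which matches the maximum ITD usually quoted for an average human head.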

 

Spatial sound can be a useful tool for sonification as it allows us to explore data in new and exciting ways. This work could also tie together the concepts of spatial audio and user interaction/interfacing.

All of this has left me with a lot of open doors, so join me again next Friday morning, when my task will be to narrow the project down further and get a bit more specific, with a view to creating a title for the project.

Let me know in the comments what you think of this week’s work! See you next week, when I will be looking further into interactive sonification and user interaction.

References 

[1] T. Hermann and A. Hunt, “Guest editors’ introduction: An introduction to interactive sonification,” IEEE MultiMedia, vol. 12, no. 2, pp. 20-24, 2005.

[2] J. A. Paradiso et al., “Tangible music interfaces using passive magnetic tags,” in Proceedings of the 2001 Conference on New Interfaces for Musical Expression, National University of Singapore, 2001.

[3] T. Nasir and J. C. Roberts, “Sonification of spatial data,” Georgia Institute of Technology, 2007.

 

Research, sonification, User interfaces

Week 2 Update: The Beginning of the Beginning.

With last term’s deadlines finally out of the way, this week it was time to move on to the next stage of the MSc course: project development. On Thursday of last week I had an initial discussion with my academic supervisor to get the ball rolling, outlining the basic theory and background of the research I will be conducting. Now it is up to me to go away and do some research! Although I did some research in my previous degree, it was to a lesser extent, and this project will require a high level of project management and personal organisation.

Sonification

In my first supervision we discussed one of the main premises that will underpin the work: sonification. None of the modules I have studied so far on the course have covered sonification in any depth (although some have covered topics that will be relevant and transferable), so this week has been spent learning more about sonification: what it does, how it works and what its potential uses might be.

Sonification is defined as the method of displaying data sonically, as an alternative to visualisation, where data is portrayed visually. Kramer [1] defined this more precisely as:

“The use of  non-speech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation.”

(Kramer, 1993)

It seems that sound can be a useful way of displaying data, as many different properties can be modified to convey change (e.g. frequency, amplitude, pitch), helping the listener to picture trends or correlations in the data. Below is an example of sonification in which the graphical representation of movement data is turned into sound [2].

The motion is tracked using the Jamoma modules in Max/MSP and is then translated into sound. This is an interesting example, as we can easily see how the sound corresponds to its visual counterpart. It also has interesting applications for people with visual impairments.

One of the areas of sonification research discussed in this preliminary meeting was the sonicules project, which looks at using sonification in chemical drug design.


This is an exciting and novel application of sonification. Displays of an enzyme’s binding sites are generally visually complex due to the convoluted shape of the molecule, so sonification can be an important tool to help with this. Another interesting aspect is the ability to hear sounds in 3D space, where it might be harder to convey the same information through visualisation. This is something I may want to consider in my research, given the knowledge I have acquired on rendering spatial audio using HRTFs (head-related transfer functions), which characterise how an ear receives a sound from a point in space.
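As a sketch of how that rendering step works in practice: given a measured pair of head-related impulse responses (HRIRs) for a particular direction, a mono signal can be placed at that direction by convolving it with each ear’s impulse response. The snippet below assumes NumPy/SciPy and that the HRIR arrays have already been loaded from some measurement set (no particular database is implied; the example data are dummies).

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to produce a binaural pair.

    mono, hrir_left, hrir_right: 1-D NumPy arrays at the same sample rate.
    Returns an (N, 2) array suitable for writing out as a stereo file.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Example with placeholder data: one second of noise "placed" by dummy HRIRs.
fs = 44100
mono = np.random.randn(fs)
hrir_l = np.zeros(256); hrir_l[0] = 1.0    # dummy impulse responses; real HRIRs
hrir_r = np.zeros(256); hrir_r[20] = 0.7   # would come from measurements
stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)  # (44355, 2): input length plus HRIR length minus one
```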

This then led on to the idea of how one might create interactive user experiences in a sonification system such as the sonicules project. Below is an example of a tangible user interface built using magnetic tags; creating something similar, but for interacting with a sonification, could be an interesting starting point. Next week, I will look at this form of interaction in more depth.

Where to next?

All of this background research has left me with a lot of intriguing ideas to investigate. I will be updating this blog every Friday morning with my findings, ideas and progress, so keep an eye out for that!

Let me know in the comments what your thoughts are on sonification and if you can think of any data that might be useful to sonify. See you next week!

References

[1] G. Kramer, Auditory Display: Sonification, Audification, and Auditory Interfaces. New York, NY: Perseus Publishing, 1993.

[2] A. R. Jensenius, “Motion-sound interaction using sonification based on motiongrams,” in Proceedings of ACHI 2012: The Fifth International Conference on Advances in Computer-Human Interactions, IARIA, 2012, pp. 170-175.