BrainModular Users Forum

NIME 2011 - Call for Participation

General Discussion about whatever fits..
manecante
Member
Posts: 71

Post by manecante » 10 Jan 2011, 16:12

hello,
I'm forwarding this message, which might interest some of you.

Dear NIME community,

11th International Conference on New Interfaces for Musical Expression (NIME 2011)
30 May - 1 June 2011, Oslo, Norway
http://www.nime2011.org

We would like to remind you that the deadline for submissions to this year’s NIME conference is only 3 weeks away. The core purpose of the NIME conference is to present the latest results in design, development, performance and analysis of/for/with new interfaces/instruments for musical use. In 2011 we will put an extra emphasis on performance aspects related to NIME, something which will also be addressed in a symposium, workshops and master classes in the days leading up to the conference.

We invite the following types of submissions (see below for details):

- Paper (oral/poster/demo)
- Performance
- Performance Plus Paper
- Installation
- Workshop


IMPORTANT DATES

- Paper/performance/installation/workshop submission: 31 January 2011 (22:00 CET)
- Review notification: 18 March 2011
- Final paper deadline: 25 April 2011


TOPICS

- Novel controllers and interfaces for musical expression
- Novel musical instruments
- Augmented/hyper instruments
- Novel controllers for collaborative performance
- Interfaces for dance and physical expression
- Interactive game music
- Robotic music
- Interactive sound and multimedia installations
- Interactive sonification
- Sensor and actuator technologies
- Haptic and force feedback devices
- Interface protocols and data formats
- Motion, gesture and music
- Perceptual and cognitive issues
- Interactivity design and software tools
- Musical mapping strategies
- Performance analysis
- Performance rendering and generative algorithms
- Machine learning in performance
- Experiences with novel interfaces in live performance and composition
- Surveys of past work and stimulating ideas for future research
- Historical studies in twentieth-century instrument design
- Experiences with novel interfaces in education and entertainment
- Reports on student projects in the framework of NIME related courses
- Artistic, cultural, and social impact of NIME technology
- Bio-music
- Mobile music technologies
- Musical human-computer interaction
- Multimodal expressive interfaces
- Practice-based research approaches/methodologies/criticism
- NIME intersecting with game design
- Sonic interaction design


CALL FOR PAPERS

We welcome submissions on all of the above-mentioned (and other) topics related to scientific research, development and artistic use of new interfaces for musical expression. There are three paper submission categories:

- Full paper (up to 6 pages in proceedings, longer oral presentation, optional demonstration)
- Short paper/poster (up to 4 pages in proceedings, shorter oral presentation or poster, optional demonstration)
- Demonstration (up to 2 pages in proceedings)

All submissions will be subject to peer review by at least three international experts. Accepted papers will be published in the conference proceedings (with an ISSN).

Paper submission information
http://www.nime2011.org/submission/#papers


CALL FOR PERFORMANCES

We welcome submission of pieces for three different types of performance venues:

- Concert hall performance
- Club performance
- Foyer “stunt” performance

Any type of NIME performance piece is welcome, but we would particularly like to encourage the use of motion capture techniques in performance. For this we can make available several different types of motion capture systems (Qualisys, XSens, Optitrack, Mega). Network pieces and mobile music pieces are also encouraged. Within reasonable limits, we may be able to provide musicians to perform pieces.

All submissions will be subject to peer review by at least three international experts. Documentation of the pieces will be made available on the NIME web page after the conference.

Performance submission information
http://www.nime2011.org/submission/#performances


CALL FOR PERFORMANCE PLUS PAPER

To support more cross-disciplinary work between scientific and artistic research, we highly encourage submissions of performance pieces related to papers. Here the scientific presentation may be the basis for the artistic presentation, or vice versa.

Submissions in this category must include both the piece and the paper, with a clear note that the paper and piece belong together. Evaluation will be based on the combined quality of both submissions.


CALL FOR INSTALLATIONS

We call for installations to be presented during the NIME conference. These may be foyer location installations or room-based installations. Within reasonable limits, we may be able to provide speakers, projectors, etc. for the installations.

Installation submission information
http://www.nime2011.org/submission/#installations


CALL FOR WORKSHOPS

We call for half-day (3 hours) or full-day (3+3 hours) workshops and tutorials. These can be targeted towards specialist techniques, platforms, hardware, software or pedagogical topics for the advancement of fellow NIME-ers and people with experience related to the topic. They can also be targeted towards visitors to the NIME community, novices/newbies, interested student participants, people from other fields, and members of the public getting to know the potential of NIME.

Workshop proposers should clearly indicate the audience and the assumed knowledge of their intended participants, to help us market to the appropriate audience. Workshops and tutorials can relate to, but are not limited to, the topics of the conference. This is a good opportunity to explore a specialised interest or interdisciplinary topic in depth, with greater time for discourse, debate, and collaboration.

Admission to workshops and tutorials will be charged separately from the main conference. Proposer(s) are responsible for publishing any workshop proceedings (if desired) and should help promote their event amongst their own networks. Workshops may be cancelled or combined if there is inadequate participation.

Workshop submission information
http://www.nime2011.org/submission/#workshops


If you have any inquiries, please email us at post@nime2011.org. Please feel free to forward this message to others.

On behalf of the NIME 2011 committee,
Alexander Refsum Jensenius (University of Oslo)
Kjell Tore Innervik (Norwegian Academy of Music)
_______________________________________________
Max mailing list
Max@bek.no
https://mail.bek.no/mailman/listinfo/max

runagate
Member
Posts: 288
Location: Austin, Texas, USA

Post by runagate » 10 Jan 2011, 19:26

Hehe - I live and breathe "new musical interfaces" and I've been thinking about contributing (though I've rarely gotten to see more than an abstract of many of the papers - some of which need to get into VST-land and collaborate ASAP, while an equal number remind me of nothing so much as 5 minutes of lazy brainstorming at the beginning of a day of design and spec-writing). I just wrote 27 handwritten pages of ranting about ideas for multimodal affordances, in hopes that the used haptic servo-actuator/interface device I just got will get some custom software from my ADD-addled, far-too-busy-to-muck-about-with-music-app-development coder partner, Armz, who nevertheless managed to get Drupal installed onto my webhost thingee.

Anyone know a good text-editor-type app with which I can easily embed semantic metadata for my ontology/taxonomy/topic maps and modal-logic assertions at the sentence level? Even just understanding how in hell to autogenerate subdomain names for hierarchical URIs would get me a long way towards publishing all these specs and notes that I, anachronistically, have mostly committed to paper over these long 15 months of sabbatical away from making entheogenic timbre-transmutation music. Note: I am no programmer, and though I theoretically get the idea behind camelCase XML and all that other gobbledegook, what I really need is a way to select text, right-click it, choose "embed metadata," scroll through a list of taxonomic hierarchical indices and just pop on the appropriate tag from my custom-designed neologisms. I suppose not, eh? I only understand 50% of the preceding technobabble myself.
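For what it's worth, the hierarchical-URI autogeneration part is sketchable in a few lines of Python (the base domain and taxonomy terms below are entirely made up for illustration):

```python
import re

def slugify(term: str) -> str:
    """Lower-case a term and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", term.lower()).strip("-")

def taxonomy_uri(path, base="https://example.org"):
    """Turn a taxonomy path like ['Interfaces', 'Haptic', 'Servo Actuator']
    into a hierarchical URI: https://example.org/interfaces/haptic/servo-actuator
    """
    return base + "/" + "/".join(slugify(p) for p in path)

print(taxonomy_uri(["Interfaces", "Haptic", "Servo Actuator"]))
# https://example.org/interfaces/haptic/servo-actuator
```

Subdomains instead of path segments would just mean joining the slugs with dots in reverse order; the slugging logic stays the same.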

Typing a bunch of words into a word processor is no big deal, but I can't help thinking I shouldn't waste them on a PDF that's only marginally comprehensible to any other human, as so many of the scribblings I come across are. Many contain ridiculously brilliant ideas, but they're so couched in the academic jargon of their particular niche that we in the world of... applied music, for lack of a better term... never get to reap the benefits of hard work that gets tossed onto a shelf after it's graded or presented at a pointy-headed-people convention. Since metadata is an absolutely integral part of the functional features of my hardware/software master plan, I really should figure out how to embed it right from the get-go. I imagine it has something to do with irritatingly anti-mnemonic abbreviations betwixt angle brackets.

Then again, sans a software engineer chained to my desk, I imagine I'll have to be patient and make sure that the neologisms I've coined (to make writing documentation take less than 10 lifetimes) can simply be clicked to display tooltips with definitions, or to take the reader via a URI to the appropriate wiki entry. I almost, almost figured out how to install my favored Web 3.0-flavored wiki derivative. Hmm, I wonder if Office 2010 has something like this that I previously overlooked, back when I didn't know the niche jargon yet?
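The clickable-neologism-with-tooltip idea is also sketchable without a software engineer - an HTML `title` attribute already gives you a hover tooltip, and the link carries the reader to the wiki URI. A rough sketch in Python, assuming a hand-made glossary dict (the term, definition, and URI below are all invented):

```python
import html

# Hypothetical glossary: term -> (definition, wiki URI)
GLOSSARY = {
    "folksonym": ("a user-coined tag term", "https://example.org/wiki/folksonym"),
}

def link_neologisms(text: str) -> str:
    """Wrap each glossary term in a link whose title attribute
    doubles as a hover tooltip in any browser."""
    for term, (definition, uri) in GLOSSARY.items():
        anchor = '<a href="%s" title="%s">%s</a>' % (
            html.escape(uri, quote=True),
            html.escape(definition, quote=True),
            html.escape(term),
        )
        text = text.replace(term, anchor)
    return text

print(link_neologisms("a folksonym tag"))
```

A real version would want word-boundary matching rather than plain `str.replace`, but the principle is just a glossary lookup plus markup.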

You'd be stunned that such sweeping systemic changes need to take place in the land of Windows/VSTs/MIDI and Linux/LV2/OSC that it seems utterly impossible to dare hope for much progress (certainly we've seen but little, tiny nudges here and there, in stark contrast to much of the rest of the technological world - which is truly tragic, as musicians are serendipitously engaged in an activity uniquely constituted to benefit much, much more than almost any other endeavor, not least because most of our ADC is merely 2D and not 3D or more). But I swear to you that I've figured out a workaround, via a total overhaul of much that we take for granted. It's worth it... you ever read the description of IRCAM's DSP crazyboxes they licensed to FLUX? They're about 2/3rds of the way to where I started out from, 5 months before they were released.

But what limitation do they butt up against? The same damned lack of >3DoF, humanly-cognizable HMI, artistically-oriented affordances intelligently designed into a communications protocol - one which needs to be implemented someday if we're ever to get past the laughable limitations of MIDI, or the uselessly-nonspecific and woefully anti-creative-type brain-drain that is OSC, which is, granted, a networking protocol, but about as musical as HTML or Lisp. So there's no way to really take advantage of their lovely dynamic spectral envelopes, for instance. Not even by hand-editing. I should know - I tend to have over 150 automation tracks per 2-to-5-minute project, limited largely by the inability to juxtapose related channels of scored data while sorting through an unlabeled, undifferentiated spaghetti of splines and such.

Granted, in the past I lacked the sheer brute processing resources needed to even hope to audibly audition such a tangle of realtime parameter frenzy - even with 15 seconds of prebuffering, the RAM long since capsized and the disk cache just too slow - so I saved up for a few years and just this week finally got a blazing-fast SSD system disk, a secondary SSD for streaming, caching and the like, and an upgrade to 12GB of RAM. It took me seven years, but my little audio lab can seemingly deal with what I deemed necessary when I first set down acoustic noisemakers and refused to pick them up until I'd exceeded them in software. Well, I'm not there yet, but only due to being poor, not particularly useful to others - especially not those who have marketable IT skills - and, most irksome of all, the odd reluctance of OEM solution suppliers to acknowledge small orders of their top-end gizmo components from unknown persons. I honestly think it'd be easier to score high explosives and endangered species than it is to get one's hands on turnkey true multi-touch digitizer + display modules, and a bunch of other stuff you've probably never heard of but would immediately go out and buy if the stuffed suits in corporations could expand their imaginations beyond cell-phone trends and, of all things, information kiosks for conventions...
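To be fair to OSC, here's roughly how little it specifies - a minimal OSC 1.0 message encoder in plain Python (stdlib only; the address pattern below is hypothetical). It's just an address string, a type-tag string, and big-endian arguments, all null-padded to 4-byte boundaries, with no musical semantics anywhere:

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC 1.0 message with float32 arguments."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# e.g. set a (hypothetical) spectral-envelope breakpoint:
packet = osc_message("/synth/envelope/1", 0.5)
```

The packet would then go out over UDP or TCP; what `/synth/envelope/1` *means* is entirely up to the receiver, which is exactly the "uselessly-nonspecific" part.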

Now I'm going to go back to writing specs on how to teach one's embedded mobile wireless control surface to collaboratively backpropagate an underlying subset of parameter value ranges, with iterative User querying, to infer what it is they'd like the underlying DSP signal chains to ultimately sound like at the business end of the DAC - despite said User having only naive, subjective and impressionistic mental metaphors for the affordances, the processors' inner workings and so on - simply by means of their evocative interactions with the multimodal interfaces, folksonym and ranking metatags, and exemplar patches/samples/&c., for comparators to converge upon a codeveloped agreement on the likely ontological centroids.
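A toy sketch of that converge-by-querying idea in Python: propose two candidate parameter vectors, ask the User which sounds closer to what they want, and move an estimate toward the winner. Everything here is invented for illustration - real preference learning would be far more involved than this hill climb:

```python
import random

def infer_preference(query_user, dim=4, rounds=20, step=0.5):
    """Toy preference inference by iterative querying:
    `query_user(a, b)` returns whichever candidate the User prefers."""
    estimate = [0.5] * dim                 # start mid-range on every parameter
    for r in range(rounds):
        spread = step * (1 - r / rounds)   # shrink the search over time
        a = [min(1, max(0, x + random.uniform(-spread, spread))) for x in estimate]
        b = [min(1, max(0, x + random.uniform(-spread, spread))) for x in estimate]
        winner = query_user(a, b)
        estimate = [(e + w) / 2 for e, w in zip(estimate, winner)]
    return estimate

# Simulated User who secretly wants parameters [0.2, 0.8, 0.5, 0.9]:
target = [0.2, 0.8, 0.5, 0.9]
def closer(a, b):
    da = sum((x - t) ** 2 for x, t in zip(a, target))
    db = sum((x - t) ** 2 for x, t in zip(b, target))
    return a if da < db else b

print(infer_preference(closer))
```

The real trick, of course, is that a human User answers "which sounds closer" by ear rather than by distance to a known target - but the query-and-converge loop is the same shape.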

Cheers
- runagate in absentia
P.S. Sorry for only occasionally popping up to rant, but it's the inevitable result of impotent daydreaming, feverish research and design, and febrile lucubration for the past 17 hours. OMG, is it after noon here now?
