Bedlam
explores the dislocation and permutation of subjectivity by computation
and telematics. Bedlam is a telematic and teleoperative art installation
comprising telerobotics, multi-camera machine vision, spatialized interactive
sound, video and web. Unlike most network experiments, Bedlam links,
not just computers and virtual environments, but real spatial locations
containing physically active people. This commitment to embodiment is
a critical experimental intervention in the development of wide bandwidth
multimodal networking.
Bedlam is an interdisciplinary project which models a novel cultural
environment from a complex of emerging technologies including pneumatics
and robotics, digital video systems, digital sound and network communication.
Bedlam is equal parts play, critique, and creative and technological R+D.
It offers a critique of academic and popular discourses of cybernetics,
artificial intelligence, robotics, 'virtual reality' and 'artificial
life'. It also constitutes experimental research in human-computer
interaction. Bedlam proposes a model of telematic interaction which
actively critiques paradigms of computer-human interaction and of VR.
We emphasize full-body interaction in which the user, unencumbered by
hardware, training or highly symbolic interaction protocols, can drive
remote and local systems by the ongoing behavior of their entire body.
At each of two sites, a participant moves within an interaction stage
facing a coordinated array of hissing and clanking telerobotic prosthetics
actuated by 'pneumatic muscles', driven by data from the remote user’s
digitized 3D image. Video imagery mixed from the vision system cameras
and other video inputs is displayed on large screens flanking the robotic
installations at each site.
At 'site A' the user stands within an 'interaction
stage', a roughly 10' open-walled cube. Beside this interaction
stage is a structure of custom robotic devices. Audiences at both sites
view the action from behind and beside the interaction stage. As the
users move, they generate real-time 8-channel spatialized sound tightly
coupled to their movement and gesture. Data about the user's movement
is passed to the remote 'site B'. This data actuates the
robotic devices at 'site B'. The robotic devices are vaguely
anthropomorphic; that is, they may be reminiscent of animal or human
body parts, but they are not assembled in the form of a body. The dynamics
of their behavior, however, reflect the dynamics of the user's behavior.
The user at site B moves in response to the behavior of the robotic
devices, creating local spatialized sound; data about his or her movement
is passed back to site A, actuating the robotic devices there. In this
way a highly mediated gestural communication loop is formed.
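As a rough illustration of this loop, the following sketch simulates a few cycles of the exchange in Python. It assumes that each site reduces its vision system's 3D body model to a few coarse movement features and that the remote features drive the local pneumatic devices; the names (Site, MovementFrame, sense, actuate_robots) and the valve mapping are hypothetical stand-ins, not the actual Bedlam systems.

    from dataclasses import dataclass, field
    import random

    @dataclass
    class MovementFrame:
        # Coarse movement features derived from the vision system's 3D body model.
        centroid: tuple   # (x, y, z) of the user's body centroid in stage coordinates
        energy: float     # overall motion energy, 0.0 (still) .. 1.0 (very active)

    @dataclass
    class Site:
        name: str
        valves: list = field(default_factory=lambda: [0.0] * 8)  # pneumatic valve openings

        def sense(self) -> MovementFrame:
            # Stand-in for the multi-camera vision system; random data here.
            return MovementFrame((random.random(), random.random(), random.random()),
                                 random.random())

        def spatialize_sound(self, frame: MovementFrame) -> None:
            # Local 8-channel spatialized sound is coupled to the local user's movement.
            pass

        def actuate_robots(self, remote: MovementFrame) -> None:
            # Movement data from the remote user drives the local pneumatic devices.
            self.valves = [min(1.0, remote.energy * (i + 1) / 8.0) for i in range(8)]

    site_a, site_b = Site("A"), Site("B")
    for _ in range(3):                        # a few cycles of the gestural loop
        frame_a, frame_b = site_a.sense(), site_b.sense()
        site_a.spatialize_sound(frame_a)      # each user hears their own movement locally...
        site_b.spatialize_sound(frame_b)
        site_a.actuate_robots(frame_b)        # ...and their movement drives the remote robots
        site_b.actuate_robots(frame_a)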
In an alternative interaction scheme, the user at site A influences or perturbs
the behavior of the robots at site A. This robot behavior is passed to the robots
at site B, and vice versa. In this version, the robots are in a constant
feedback loop of communication, and that system is perturbed by human
users at both ends.
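A similarly hedged sketch of this alternative scheme is given below: the robot states at the two sites are continuously coupled to each other, and each local user only perturbs that coupling. The state representation and the coupling and decay coefficients are invented for illustration.

    def step(state_a, state_b, perturb_a=0.0, perturb_b=0.0, coupling=0.8, decay=0.95):
        # One exchange: each site's next robot state is driven mainly by the other
        # site's current state, nudged by the local user's movement energy.
        next_a = decay * (coupling * state_b + (1 - coupling) * state_a) + perturb_a
        next_b = decay * (coupling * state_a + (1 - coupling) * state_b) + perturb_b
        return next_a, next_b

    a, b = 0.2, 0.7
    for _ in range(5):
        a, b = step(a, b, perturb_a=0.05)     # only the site A user is moving here
        print(round(a, 3), round(b, 3))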
Unlike most interactive systems, our custom multi-camera machine vision
system allows for radically active behavior without any hardware or
tethers. Real-time spatialized sound in each 'interaction stage'
is generated from the real-time 3D model of the user built by the vision
system.
'Sound agents' also share the same virtual space as the
user's body model and behave sometimes in a completely autonomous
manner, sometimes in direct response to the viewer's actions;
their coordinates in the virtual space are mapped onto 3D sound positioning
in the real space.
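One plausible form of that mapping is sketched below under assumed details: eight speakers at the corners of the stage cube and a simple inverse-distance panning law, neither of which is specified here.

    import math

    # Eight speakers assumed at the corners of a 10' stage cube (coordinates in feet).
    SPEAKERS = [(x, y, z) for x in (0, 10) for y in (0, 10) for z in (0, 10)]

    def channel_gains(agent_pos, rolloff=1.5):
        # Inverse-distance gains, normalized so that total power stays constant.
        raw = [1.0 / (math.dist(agent_pos, spk) ** rolloff + 1e-6) for spk in SPEAKERS]
        norm = math.sqrt(sum(g * g for g in raw))
        return [g / norm for g in raw]

    # An autonomous agent drifting through the virtual space becomes a moving set of
    # gains in the real space; an agent responding to the viewer would instead be fed
    # coordinates taken from the user's body model.
    print([round(g, 2) for g in channel_gains((2.0, 5.0, 8.0))])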
The general effect (for each participant and for on-site audiences)
is of a space of partial and quasi-identities in flux, which nonetheless
carries strong suggestions of a communicative loop between the two users,
mediated by network, robotic and media elements.