Illustration by Angela Torchio
GitHub Repository: https://github.com/npuckett/p5-phone
p5-phone bridges the gap between the mobile-specific functions that already exist in p5.js and the realities of contemporary mobile browsers, so that phones can be used for experimental interactions. It does this through both addition and subtraction: it streamlines access to the phone's sensors so their data can be used inside p5, and it disables the browser's default gestures so that you can create your own.
<script src="https://cdn.jsdelivr.net/npm/p5-phone@1.6.4/dist/p5-phone.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/p5-phone@1.6.4/dist/p5-phone.js"></script>
p5.js already has several commands that are specific to phone hardware, but extra steps are needed to use them effectively. To use the touch-based commands, you need to disable the default gestures in the phone's browser. For motion and microphone data, you need to grant specific permissions before the browser can read the data. The relevant built-in features are:
- touchStarted() - Called when a touch begins
- touchEnded() - Called when a touch ends
- rotationX - Device tilt forward/backward
- rotationY - Device tilt left/right
- rotationZ - Device rotation around the screen
- accelerationX - Acceleration left/right
- accelerationY - Acceleration up/down
- accelerationZ - Acceleration forward/back
- deviceShaken() - Shake detection event
- deviceMoved() - Movement detection event
- setShakeThreshold() - Set shake detection sensitivity
- setMoveThreshold() - Set movement detection sensitivity
- p5.AudioIn() - Audio input object
- getLevel() - Current audio input level

A minimal HTML page loads p5.js, p5-phone, and your sketch:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Mobile p5.js App</title>
<style>
body {
margin: 0;
padding: 0;
overflow: hidden;
}
</style>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.11.10/p5.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/p5-phone@1.6.4/dist/p5-phone.min.js"></script>
</head>
<body>
<script src="sketch.js"></script>
</body>
</html>
The accompanying sketch.js enables each feature in setup() and checks its status flag before using it:

let mic;

function setup() {
  // Show debug panel FIRST to catch setup errors
  showDebug();

  createCanvas(windowWidth, windowHeight);

  // Lock mobile gestures to prevent browser interference
  lockGestures();

  // Enable motion sensors with tap-to-start
  enableGyroTap('Tap to enable motion sensors');

  // Enable microphone with tap-to-start
  mic = new p5.AudioIn();
  enableMicTap('Tap to enable microphone');
}

function draw() {
  background(220);

  // Always check status before using hardware features
  if (window.sensorsEnabled) {
    // Use device rotation and acceleration
    fill(255, 0, 0);
    circle(width / 2 + rotationY * 5, height / 2 + rotationX * 5, 50);
  }

  if (window.micEnabled) {
    // Use microphone input
    let level = mic.getLevel();
    fill(0, 255, 0);
    rect(10, 10, level * 200, 20);
  }
}

// Prevent default touch behavior
function touchStarted() {
  return false;
}
| Function | Description |
|---|---|
| lockGestures() | Prevent browser gestures (call in setup()). Blocks pinch-to-zoom, pull-to-refresh, swipe navigation, long-press context menus, text selection, and double-tap zoom. |
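For example, a minimal sketch (illustrative, not from the library's documentation) that uses lockGestures() so the browser stays out of the way while you draw with your fingers; everything besides lockGestures() is standard p5.js:

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Stop pinch-to-zoom, pull-to-refresh, etc. so touches reach the sketch
  lockGestures();
  background(240);
}

function draw() {
  // Paint a dot at every active touch point (touches is standard p5.js)
  noStroke();
  fill(0);
  for (let t of touches) {
    circle(t.x, t.y, 30);
  }
}

// Returning false suppresses the browser's remaining default touch behavior
function touchStarted() {
  return false;
}

function touchMoved() {
  return false;
}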
| Function | Description |
|---|---|
| enableGyroTap(message) | Tap anywhere to enable motion sensors. Once enabled (when window.sensorsEnabled is true), provides access to rotationX, rotationY, rotationZ, accelerationX, accelerationY, accelerationZ, deviceShaken, and deviceMoved. |
| enableGyroButton(text) | Button-based sensor activation. Once enabled (when window.sensorsEnabled is true), provides access to the same rotation, acceleration, shake, and movement values. |
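A sketch (illustrative) using the button variant; rotationX/rotationY, deviceShaken(), and setShakeThreshold() are standard p5.js features that become usable once the sensors are enabled, and the threshold value here is arbitrary:

let shakes = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  // Show a button that requests motion-sensor permission when pressed
  enableGyroButton('Enable motion');
  // Standard p5.js: how hard the phone must be shaken to fire deviceShaken()
  setShakeThreshold(30);
}

function draw() {
  background(220);
  if (window.sensorsEnabled) {
    fill(0);
    text('Shakes: ' + shakes, 20, 30);
    // Tilt the phone to move the ball
    fill(255, 0, 0);
    circle(width / 2 + rotationY * 5, height / 2 + rotationX * 5, 60);
  }
}

// Standard p5.js event; fires when the enabled sensors detect a shake
function deviceShaken() {
  shakes++;
}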
| Function | Description |
|---|---|
| enableMicTap(message) | Tap anywhere to enable the microphone (requires the p5.sound library). Once enabled (when window.micEnabled is true), use with mic.getLevel() and other p5.AudioIn methods. |
| enableMicButton(text) | Button-based microphone activation (requires the p5.sound library). Once enabled (when window.micEnabled is true), use with mic.getLevel() and other p5.AudioIn methods. |
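A sketch (illustrative) using the button variant; mic.getLevel() returns a value between 0 and 1, and the lerp() smoothing is just a stylistic choice:

let mic;
let smoothed = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  // p5.AudioIn comes from the p5.sound library
  mic = new p5.AudioIn();
  enableMicButton('Enable microphone');
}

function draw() {
  background(220);
  if (window.micEnabled) {
    // Smooth the raw level (0-1) so the circle doesn't flicker
    smoothed = lerp(smoothed, mic.getLevel(), 0.2);
    fill(0, 255, 0);
    circle(width / 2, height / 2, 50 + smoothed * 400);
  }
}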
| Function | Description |
|---|---|
| enableSoundTap(message) | Tap anywhere to enable sound playback (no microphone input). |
| enableSoundButton(text) | Button-based sound activation (no microphone input). |
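A sketch (illustrative) assuming the p5.sound library is also loaded; enableSoundTap() unlocks audio output, after which a p5.Oscillator can play. The pitch mapping is arbitrary:

let osc;
let playing = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  enableSoundTap('Tap to enable sound');
  // p5.Oscillator comes from the p5.sound library
  osc = new p5.Oscillator('sine');
}

function draw() {
  background(220);
  if (window.soundEnabled) {
    if (!playing) {
      osc.start();
      osc.amp(0.3, 0.1); // fade in to a modest volume
      playing = true;
    }
    // Map horizontal touch position to pitch, 200-800 Hz
    osc.freq(map(mouseX, 0, width, 200, 800));
  }
}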
| Function | Description |
|---|---|
| enableSpeechTap(message) | Enable speech recognition support (requires the p5.js-speech library). Activates the audio context without creating a p5.AudioIn, avoiding microphone hardware conflicts on mobile devices. |
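A sketch (illustrative) assuming the p5.js-speech library is loaded; the p5.SpeechRec calls follow that library's documented pattern and should be checked against the version you use. showDebug() and debug() are described further below:

let recognizer;
let listening = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
  showDebug();
  lockGestures();
  // Unlock the audio context without claiming the microphone hardware
  enableSpeechTap('Tap to enable speech');
  // p5.SpeechRec comes from the p5.js-speech library
  recognizer = new p5.SpeechRec('en-US', gotSpeech);
}

function gotSpeech() {
  if (recognizer.resultValue) {
    // Show the recognized phrase in the on-screen debug panel
    debug(recognizer.resultString);
  }
}

function touchEnded() {
  // Start listening after the first tap; mobile browsers require a user gesture
  if (!listening) {
    recognizer.start();
    listening = true;
  }
  return false;
}

function draw() {
  background(220);
  fill(0);
  text(listening ? 'Listening...' : 'Tap to start listening', 20, 30);
}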
| Function | Description |
|---|---|
| enableVibrationTap(message) | Tap anywhere to enable the vibration motor (Android only; not supported on iOS). |
| enableVibrationButton(text) | Button-based vibration activation (Android only; not supported on iOS). |
| vibrate(pattern) | Trigger vibration with a duration in milliseconds or a pattern array. Example: vibrate(50) or vibrate([100, 50, 100]). |
| stopVibration() | Stop any ongoing vibration. |
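A sketch (illustrative); the pattern values are arbitrary, and nothing will happen on iOS:

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  enableVibrationTap('Tap to enable vibration (Android only)');
}

function draw() {
  background(220);
  fill(0);
  text(window.vibrationEnabled ? 'Touch the screen to buzz' : 'Vibration not enabled', 20, 30);
}

function touchStarted() {
  if (window.vibrationEnabled) {
    // Buzz-pause-buzz pattern, in milliseconds
    vibrate([100, 50, 100]);
  }
  return false;
}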
| Function | Description |
|---|---|
| createPhoneCamera(mode, mirror, displayMode) | Create a camera optimized for phone use. Returns a PhoneCamera instance with automatic coordinate mapping for ML5 models (BodyPose, FaceMesh, HandPose). Parameters: mode ('user' for the front camera, 'environment' for the back camera), mirror (boolean, flip horizontally), displayMode ('fitHeight', 'fitWidth', 'cover', 'contain', or 'fixed'). Use with ML5 models for automatic mapping between video and canvas space. |
| enableCameraTap(message) | Tap anywhere to enable camera permissions. Automatically initializes all PhoneCamera instances. Required for iOS camera access. |
| enableCameraButton(text) | Button-based camera activation. Creates a button that enables camera permissions when clicked. |
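A minimal setup sketch (illustrative); it only creates the camera, requests permission, and reports readiness through cam.ready, without doing anything with the video yet:

let cam;

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  // Front camera, mirrored, scaled to fill the canvas height
  cam = createPhoneCamera('user', true, 'fitHeight');
  // iOS requires a user gesture before camera access is granted
  enableCameraTap('Tap to enable camera');
}

function draw() {
  clear();
  fill(0);
  text(cam.ready ? 'Camera ready' : 'Waiting for camera...', 20, 30);
}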
| Method | Description |
|---|---|
| cam.mapPoint(x, y) | Map a point from video coordinates to canvas display coordinates, handling scaling and mirroring automatically. Returns an {x, y} object. Use for drawing custom points on top of the scaled video. |
| cam.mapKeypoint(keypoint) | Map an ML5 keypoint object to display coordinates, handling scaling and mirroring automatically. Preserves all keypoint properties (z, confidence, etc.) and returns the mapped keypoint. Use with ML5 BodyPose, FaceMesh, or HandPose keypoints. |
| cam.mapKeypoints(keypoints) | Map an array of ML5 keypoints to display coordinates, handling scaling and mirroring automatically. Returns an array of mapped keypoints. Use when processing multiple keypoints at once. |
| cam.onReady(callback) | Set a callback function to run when the video is fully ready for ML5 detection. Use this before initializing ML5 models to ensure the video element has loaded. Example: cam.onReady(() => { /* create ML5 model here */ }). |
| cam.getDimensions() | Get dimension information for the current display mode. Returns an {x, y, width, height, scaleX, scaleY} object. Use for custom coordinate calculations or understanding the video layout. |
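A sketch (illustrative) combining cam.onReady(), cam.videoElement, and cam.mapKeypoints() with ml5.js HandPose; the ml5 calls follow ml5's current handPose()/detectStart() pattern and should be checked against the ml5 version you load:

let cam;
let hands = [];

function setup() {
  createCanvas(windowWidth, windowHeight);
  lockGestures();
  cam = createPhoneCamera('user', true, 'cover');
  enableCameraTap('Tap to enable camera');

  // Wait until the video element is ready before handing it to ml5
  cam.onReady(() => {
    const handPose = ml5.handPose(() => {
      // Model loaded: run continuous detection on the camera's native video element
      handPose.detectStart(cam.videoElement, (results) => {
        hands = results;
      });
    });
  });
}

function draw() {
  clear();
  noStroke();
  fill(0, 255, 0);
  for (let hand of hands) {
    // Convert keypoints from video coordinates to canvas coordinates
    for (let kp of cam.mapKeypoints(hand.keypoints)) {
      circle(kp.x, kp.y, 12);
    }
  }
}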
| Property | Description |
|---|---|
| cam.videoElement | Read-only. The native HTML video element, for use with ML5 libraries. Pass this to ML5 model constructors. |
| cam.ready | Read-only. Boolean indicating whether the camera is ready for use. |
| cam.width | Read-only. Current display width of the video on the canvas. |
| cam.height | Read-only. Current display height of the video on the canvas. |
| cam.active | Read-write. Camera facing mode: 'user' (front) or 'environment' (back). Changing this switches the camera. |
| cam.mirror | Read-write. Boolean controlling horizontal mirroring of the video. |
| cam.mode | Read-write. Display mode: 'fitWidth', 'fitHeight', 'cover', 'contain', or 'fixed'. |
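Since cam.active is read-write, switching between the front and back cameras can be as simple as this (assuming the cam from the sketch above):

function touchEnded() {
  if (cam && cam.ready) {
    // Flip between the front ('user') and back ('environment') cameras
    cam.active = (cam.active === 'user') ? 'environment' : 'user';
  }
  return false;
}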
| Variable | Description |
|---|---|
| window.sensorsEnabled | Boolean: true when motion sensors are active |
| window.micEnabled | Boolean: true when the microphone is active |
| window.soundEnabled | Boolean: true when sound output is active |
| window.vibrationEnabled | Boolean: true when vibration is available (Android only) |
| Function | Description |
|---|---|
| showDebug() | Show the on-screen debug panel |
| hideDebug() | Hide the debug panel |
| toggleDebug() | Toggle panel visibility |
| debug(...args) | console.log with on-screen display |
| debugError(...args) | Display errors with red styling |
| debugWarn(...args) | Display warnings with yellow styling |
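Since the browser console isn't readily visible on a phone, the on-screen panel is the practical way to see what a sketch is doing. A small illustrative sketch:

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Open the panel before anything else so errors during setup are visible on the phone
  showDebug();
  lockGestures();
  enableGyroTap('Tap to enable motion sensors');
  debug('setup complete:', width, 'x', height);
}

function draw() {
  background(220);
}

function deviceShaken() {
  debugWarn('shake detected');
}

function touchEnded() {
  if (!window.sensorsEnabled) {
    debugError('motion sensors are not enabled yet');
  }
  return false;
}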
The project includes the following examples:

- Basic setup from the README - a starting point for your own projects
- Touch-to-talk speech recognition using the Web Speech API
- Body tracking with phone sensors and ml5.js BodyPose
- Face tracking with phone sensors and ml5.js FaceMesh
- Hand tracking with phone sensors and ml5.js HandPose
- 3D body tracking visualization with Three.js and ml5.js
- 3D face tracking visualization with Three.js and ml5.js
- 3D hand tracking visualization with Three.js and ml5.js
- Compare clicking a button vs shaking your device to trigger actions
- Compare button clicks vs device movement for interaction control
- Compare a button interface vs device orientation for control
- RGB color control: traditional sliders vs device rotation
- RGB color control: traditional sliders vs device acceleration
- Volume control: traditional slider vs microphone input level
- Value control: single slider vs multiple finger touches
- Value control: traditional slider vs finger distance measurement
- Value control: linear slider vs multi-touch angle detection