Grad Portfolio

p5.js

As part of my deep dive into the forefront of creative technology, I set out to explore the latest innovations in frontend web development. Today’s creative technologists are pushing the boundaries of what’s possible by integrating code with both software and hardware, reimagining how we interact with computers. The interface between humans and machines is constantly evolving, opening new dimensions of interaction.

In this exploration, I delved into three fascinating topics using p5.js (a JavaScript library): transforming a personal diary into a dynamic data chart, crafting immersive experiences with particle systems and orbiting characters, and experimenting with a microcontroller (Arduino) light sensor to create a symphony of sound and color.
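To make that concrete, here is a minimal p5.js sketch in the spirit of the particle and orbit studies. It is an illustration written for this page, not the project code itself: particles drift upward while a character glyph orbits the canvas center. Paste it into the p5.js web editor to run it.

    // Minimal particle system plus an orbiting character (illustrative only).
    let particles = [];

    function setup() {
      createCanvas(400, 400);
      noStroke();
    }

    function draw() {
      background(20);

      // Emit one particle per frame at the bottom edge.
      particles.push({ x: random(width), y: height, speed: random(1, 3) });

      // Drift particles upward and draw them; drop any that leave the canvas.
      fill(255);
      for (const p of particles) {
        p.y -= p.speed;
        circle(p.x, p.y, 4);
      }
      particles = particles.filter((p) => p.y > 0);

      // The "orbiting character": a glyph circling the canvas center.
      const angle = frameCount * 0.02;
      textSize(24);
      text('@', width / 2 + 120 * cos(angle), height / 2 + 120 * sin(angle));
    }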

Interaction with computers has transcended the traditional boundaries of keyboards and mice. Static landing pages are a thing of the past. We are now on the cusp of a new era where technology compels us to innovate and evolve at breakneck speed. This is a world where our gestures, our movements, even the ambient light in a room can be harnessed to create new forms of expression and communication. My work is a testament to the exciting possibilities that lie ahead as we continue to blur the lines between the physical and the digital, between the artist and the machine.


AR Interaction with Generative AI

Augmented Reality (AR) is rapidly reshaping the landscape of human-computer interaction, blurring the lines between the physical and digital worlds. My journey into AR involved pushing the boundaries of what’s possible with the most cutting-edge platforms available today, including Niantic Lightship, Adobe Aero, Unreal, Snap Lens Studio, Meta Spark, and TikTok Effect House. By rigorously testing the limits of these offerings, I sought to understand and expand the potential of mixed reality experiences.

In this work, I explored the full spectrum of human-computer interaction, using the entire body as an interface for AR effects and storytelling. It didn’t stop there: I also harnessed generative AI to create a vast array of assets, from 2D images and animated videos to intricate 3D objects, all of which played pivotal roles in these projects. Generative AI is transforming the creative industries by letting artists and technologists focus more on concepts and ideas than on the mechanics of delivery. The technology is evolving at an unprecedented pace; text-to-video generation has improved dramatically in just the past few months, surpassing what was possible when I completed these projects six months ago.

This is an exhilarating time for small production houses, as the tools and techniques at our disposal continue to grow more powerful and accessible. The future of AR lies in its ability to transform storytelling and interaction, making them more visceral, more engaging, and more human. As we continue to explore and innovate within this space, AR will become an integral part of how we communicate, create, and connect. These projects demonstrate the potential of AR and generative AI to revolutionize our world, offering glimpses into a future where the boundaries between the virtual and the real are fluid and ever-changing.


Portable AI with Personalities

I have been working with NLP since 2018, primarily analyzing the sentiment and context of massive text libraries of transcripts and grants. In my grad work, I built on these basics and began working with large language models (LLMs).

The future of AI is leaning toward energy efficiency, emulating human behaviors, and, in some very early use cases, portability, with the emergence of eLMs (efficient language models) leading the way. These models are designed to shrink parameter counts and memory footprints, making them compact enough to run on smaller devices.
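The back-of-envelope math makes the appeal concrete. A quick sketch in JavaScript, using illustrative numbers rather than any particular model:

    // Approximate weight memory: parameters × bits per weight / 8 bits per byte.
    // The figures below are illustrative assumptions, not a specific model we ran.
    const params = 1.1e9; // a ~1-billion-parameter small model
    const bits = 4;       // 4-bit quantized weights
    const gigabytes = (params * bits) / 8 / 1e9;
    console.log(`${gigabytes.toFixed(2)} GB of weights`); // ~0.55 GB, within reach of a Raspberry Pi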

A team of us set out to answer a challenging question: can we run a fully functional AI on a constrained device (in our case, a Raspberry Pi CPU), complete with a distinct personality, without relying on an internet connection? We succeeded, even on hardware that wasn’t originally designed for such tasks.

Given the time constraints of school, we kept it simple by making our central character a “pet rock” that did more than just sit there: it interacted with you. By combining a low-token language model with computer vision, we could upload an image into the AI, which then described what it saw with a “touch” of personality, ranging from angry to calm or neutral tones. The main downside was the lag between image input and text output. The day we finished, we were already seeing hardware vendors making a play at this type of application.
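To show the shape of that pipeline, here is a hedged sketch of the caption-then-respond loop, written for this page with transformers.js. The library, model names, and file path are illustrative stand-ins, not the stack from the actual project (that lives in the GitHub repo below). Run it as a Node ES module.

    // Illustrative only: caption an image with a small vision model, then have
    // a small language model restate the caption "in character".
    import { pipeline } from '@xenova/transformers';

    const PERSONA = 'You are a grumpy pet rock. React in one short, irritated sentence.';

    // Step 1: a compact vision model turns the uploaded image into a caption.
    const captioner = await pipeline('image-to-text', 'Xenova/vit-gpt2-image-captioning');
    const [{ generated_text: caption }] = await captioner('photo.jpg'); // placeholder path

    // Step 2: a compact language model adds the personality.
    const generator = await pipeline('text-generation', 'Xenova/gpt2');
    const [reply] = await generator(`${PERSONA}\nYou see: ${caption}\nYour reaction:`, {
      max_new_tokens: 40,
    });

    console.log(reply.generated_text);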

There’s a growing need for more research in AI on edge devices, especially as security concerns continue to rise in the realm of IoT. Efficient, small-scale AI could offer a viable solution to these challenges, paving the way for safer and more autonomous systems.

GitHub: https://github.com/D-Vaillant/petrock

Machine Learning & Data Science

There are untold amounts of data available for analysis, both to identify trends in human behavior and to characterize scripted computer behaviors. I laid a foundation by studying data science, then carried it into a machine learning course covering the fundamentals of artificial intelligence (AI).

I evaluated multiple machine learning models’ ability to analyze traffic, demographic, and income data and detect which groups would, in theory, be most impacted by NYC’s new congestion pricing tolls.
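As a sketch of what one round of that evaluation looks like, here is a minimal baseline in TensorFlow.js. The tensors are synthetic stand-ins rather than the real NYC datasets, and the feature columns are assumptions for illustration.

    // Logistic regression as the simplest baseline in a model comparison.
    import * as tf from '@tensorflow/tfjs';

    // Per-tract features (assumed): [daily car trips, median income, transit access].
    const xs = tf.randomUniform([200, 3]);
    // Label: 1 = projected to be highly impacted by the toll, 0 = not.
    const ys = tf.randomUniform([200, 1]).round();

    const model = tf.sequential();
    model.add(tf.layers.dense({ inputShape: [3], units: 1, activation: 'sigmoid' }));
    model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });

    await model.fit(xs, ys, { epochs: 20, validationSplit: 0.2, verbose: 0 });
    const [loss, accuracy] = model.evaluate(xs, ys);
    accuracy.print(); // swap in stronger models and compare this metric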

These data sets already existed. The next step for cities should be to connect this data to machine learning models and predict the impact of policy changes like these tolls before they take effect.

Deep Learning

I also took a deep dive into implementing various machine learning models and exploring the mathematical black box of AI. This included using computer vision to interpret map pixels and predict the extent of fixed-position objects obscured by other objects (“sidewalks covered by trees”).
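One way to frame that task is “pixels in, mask out.” Below is an illustrative miniature in TensorFlow.js: given a map tile, predict a per-pixel probability that a sidewalk lies underneath, even where trees hide it. The shapes and layers are assumptions made for this sketch, not the coursework architecture.

    // Tiny fully convolutional model: 64x64 RGB tile in, 64x64 probability mask out.
    import * as tf from '@tensorflow/tfjs';

    const model = tf.sequential();
    model.add(tf.layers.conv2d({
      inputShape: [64, 64, 3], filters: 16, kernelSize: 3, padding: 'same', activation: 'relu',
    }));
    model.add(tf.layers.conv2d({ filters: 16, kernelSize: 3, padding: 'same', activation: 'relu' }));
    // One sigmoid channel per pixel: P(sidewalk under this pixel).
    model.add(tf.layers.conv2d({ filters: 1, kernelSize: 1, activation: 'sigmoid' }));
    model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy' });
    model.summary();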

Immersive UX Design

Creating a great experience “starts with a story.” Literally every immersive designer says this. To support stories told in new ways, the technology tools keep becoming more efficient and less cost-prohibitive.

This upcoming semester I am homing in on a unique experience that brings together all the tools I have learned, merging them with human-centered interaction, audio for immersive games, and creative engineering with lights and sensors.

More to come at the end of 2024…