Upcoming Project Updates

Now that my spring break is underway, I will be updating this page with progress on my current projects from my previous post. Each project will get its own dedicated post in the following weeks, so stay tuned!

Guitar Pedal

I’ve completed a working prototype on the breadboard. It gives off a slightly overdriven tone, but turning the potentiometer on the amplifier stage can push it right into a nice, saturated fuzz. I have no dedicated tone-shaping controls, but the final volume adjustment has an interesting effect on the filtered frequencies: I placed an electrolytic capacitor between the free leg of the potentiometer and ground, creating a filter. I haven’t analyzed which frequencies are affected yet, but I love the sound either way.
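If the pot resistance and that capacitor behave like a simple first-order RC filter, the corner frequency is easy to estimate before breaking out the oscilloscope. A quick sketch (component values here are hypothetical, not the ones on my breadboard):

```python
import math

def rc_cutoff_hz(resistance_ohms: float, capacitance_farads: float) -> float:
    """First-order RC filter corner frequency: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

# Hypothetical example: 1 kOhm of pot travel in series with a 1 uF electrolytic
print(round(rc_cutoff_hz(1e3, 1e-6), 1))  # ~159.2 Hz
```

Since the resistance changes as the pot turns, the corner frequency sweeps with the volume setting, which would explain why the filtering character changes with the knob.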
The prototype PCBs were a bust. I was fairly confident in the footprints I made in Altium, but I had some fundamental misconceptions about creating custom footprints for through-hole components. I will need to address these before I order new test boards.
I have a proof-of-concept pedal that I haphazardly threw together before the last career fair at Pitt. Aptly named “Concept Fuzz,” it was a directly soldered version of what I had on the breadboard. Unfortunately, likely due to that haphazardness, it was not functioning when I brought it in. My attempts to save the circuit have failed, so it is now a paperweight reminding me to double-check layouts before soldering.

Clock/Decibel Meter

I have made some great progress on this one so far. The SN74HC595 shift register outputs work as expected, I can easily interface with my DS1307 RTC module through the I2C protocol, and I have started soldering together the final board for the clock portion of the project.
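One detail worth noting about the DS1307 is that its time registers hold binary-coded decimal (BCD), so every read and write needs a conversion step. A small sketch of those helpers (the firmware itself would do this in Arduino C++, but the bit-twiddling is identical):

```python
def dec_to_bcd(value: int) -> int:
    """Pack a 0-99 decimal value into the BCD layout the DS1307 expects."""
    return ((value // 10) << 4) | (value % 10)

def bcd_to_dec(value: int) -> int:
    """Unpack a BCD byte read from a DS1307 time register."""
    return ((value >> 4) * 10) + (value & 0x0F)

print(hex(dec_to_bcd(59)))  # 0x59 -- e.g. the seconds register at 59 seconds
print(bcd_to_dec(0x23))     # 23
```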
Once the package of INMP441 digital microphones arrives, I will begin working on the decibel meter section of the project. I have spent many hours attempting to get my little electret microphones to work for this project, with no luck. It would be easy to blame the microphones, the amplifiers, or my own equipment, but it is more than likely a gap in my understanding of the microphones. I am restricted to powering the microphones from either 5V or 3.3V and producing a rectified output, and that restriction has made it hard for me to build a working circuit. The INMP441 microphones are digital and communicate over the I2S protocol, so I expect much better luck with these.
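Since the INMP441 hands over raw PCM samples, the decibel math becomes pure software: take the RMS of a block of samples and convert to dB. A sketch of that conversion (this gives dB relative to full scale; turning it into calibrated SPL would need an offset from the mic's sensitivity spec):

```python
import math

def dbfs(samples: list[float]) -> float:
    """Convert a block of normalized samples (-1.0..1.0) to dB full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # floor avoids log10(0) on silence

# A full-scale sine wave has an RMS of 1/sqrt(2), i.e. about -3.01 dBFS
sine = [math.sin(2 * math.pi * n / 64) for n in range(64)]
print(round(dbfs(sine), 2))  # -3.01
```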

New Project (yay!)

I’ve been speaking to some of my classmates, and we are going to work together to get this project working, each having a different idea of how the final product will look. I’ll describe my version here.

Overview:
An AI-powered personal assistant with wake word detection, built around an ESP32. It will have access to OpenAI’s GPT-4 or GPT-3.5 Turbo models.

(Current) Detailed workflow:
Phase 1:
Using an external module that has built-in wake word detection, the system will wait for a name to be spoken.
When a name is spoken, the system will stream the incoming audio to a speech-to-text service.
Once the service responds with the transcript, the ESP32 will compile it into a formatted JSON payload.
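The Phase 1 steps above could be sketched as a small payload builder. Everything here is an assumption on my part, not a fixed schema: the field names and the wake word "jarvis" are hypothetical placeholders, and the real firmware would assemble this string on the ESP32 rather than in Python.

```python
import json

def build_request(wake_word: str, transcript: str, history_summary: str) -> str:
    """Assemble the transcribed speech into the JSON payload described above."""
    payload = {
        "wake_word": wake_word,      # later decides which GPT model gets the prompt
        "prompt": transcript,        # text returned by the speech-to-text service
        "context": history_summary,  # summarized message history (see Phase 2)
    }
    return json.dumps(payload)

print(build_request("jarvis", "what time is it", ""))
```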

Phase 2:
The ESP32 will send the formatted JSON payload to one of OpenAI’s GPT APIs; which one depends on the wake word that was detected earlier.
Most prompts should be sent to the cheaper model to keep costs down. A portion of the formatted JSON will be reserved for context and a summarized message history. When this context-and-history section reaches a specified token count, the cheapest GPT model will be called to condense it. The default model will also have the option to escalate a prompt to the more expensive model if the prompt requires more “thinking” or logic.
The GPT model should send JSON back to the ESP32. Much later in this project’s life, I may have the model include specific instructions in the response that the ESP32 could detect and hand off to other ESP32s to execute (this idea wouldn’t be considered for a while, but it sounds fun).
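The routing rules in Phase 2 boil down to two small decisions, sketched below. The model names are the real OpenAI identifiers, but the wake word, the token threshold, and the function shapes are all illustrative assumptions:

```python
def choose_model(wake_word: str, needs_reasoning: bool = False) -> str:
    """Pick which OpenAI model handles a prompt."""
    if needs_reasoning:
        return "gpt-4"         # escalate prompts that need more "thinking" or logic
    if wake_word == "jarvis":  # hypothetical wake word mapped to the expensive model
        return "gpt-4"
    return "gpt-3.5-turbo"     # default: the cheaper model handles most prompts

def context_needs_summary(context_tokens: int, limit: int = 1500) -> bool:
    """When the context/history section passes its token budget,
    the cheapest model gets called to condense it."""
    return context_tokens > limit
```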

Phase 3:
Given a formatted JSON response, the ESP32 must parse it to find the reply from the GPT model.
Once found, the ESP32 will send the response to a text-to-speech service (likely from Google).
Any audio response given back to the ESP32 will be sent to an external module to play the digital audio sample back out to the user.
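The parsing step in Phase 3 amounts to walking the chat-completion response structure that OpenAI’s API documents (`choices[0].message.content`). A sketch, with a hand-built sample response standing in for a real API reply:

```python
import json

def extract_reply(raw: str) -> str:
    """Pull the assistant's text out of an OpenAI chat-completion style response."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"]

# Hand-built stand-in for a real API response
sample = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "It is 3 PM."}}]
})
print(extract_reply(sample))  # It is 3 PM.
```

On the ESP32 itself this would more likely use a streaming C++ JSON parser to keep memory use down, but the traversal is the same.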

Phase 4:
This phase consists of integration, and it begins only after the previous phases are working exactly as intended. I currently do not know what it will entail, as I have not worked through the earlier phases yet. While integration may sound simple to non-engineers, it will likely turn out to be a nightmare in practice.

Phase 5:
I am aware of modules that allow microcontrollers to connect to cellular networks. In fact, I picked up a SIM800L module to play around with. This module is way too weak for this project, but if I get comfortable with TinyGSM and AT commands, I may upgrade to a SIM7600 series module to add that functionality to my project.
