Showcase Projects
2020-11-04T17:03:24-05:00

Showcase Project Information

Register Projects Here

Each year, the TechOlympics Showcase is a central part of our technology competition and event. Schools work on their Showcase projects with their coach during the November to February timeframe, then present their projects as a team in front of our panel of executive judges from all over Cincinnati’s business community. 

Create a project in one of these categories to earn points for your school in the 2021 TechOlympics Showcase. Sign up here. Use these as inspiration, but the sky’s the limit for what you can do. You will present this project to a panel of expert judges at Illuminate 2021 in February!

Come back to this page to submit your Showcase application before the deadline to ensure a spot in the conference schedule! If you have any questions, email [email protected].

Each school can submit an unlimited number of projects, provided five representatives from the school attend the conference to present each project.

Projects should be substantial. If a high level of effort is apparent when your project is being judged, your project will earn more points. Think outside the box, cover all your bases, and use a team to divide a huge project into small pieces so your school can succeed. The winning project of the TechOlympics showcase will have the opportunity to present their project in front of over 500 of their peers!

Showcase Rubric

Category: Creativity and Originality

High: Surprisingly innovative. Clearly not a copy of anything that already exists.

Low: Not very original; an overused idea. Cliché, unimaginative.

Category: Presentation

High: All students show terrific, professional stage presence and present very well. Visuals and graphics look very professional.

Low: Poor public speaking skills. Visuals are not effective in communicating the idea.

Category: Real-World Impact

High: Project provides a powerful, innovative solution to a real-world problem that could have an impact on many people if implemented widely.

Low: Project is shallow: it may be interesting, but it does not address a real-world issue.

Category: Technical Difficulty

High: Project involves complex subject matter and uses sophisticated technology creatively to address the problem.

Low: Project addresses a relatively superficial problem and does not represent a technical challenge of substance.

Best Practices

  • While a working prototype is not required, it is highly encouraged. A working prototype helps eliminate ambiguities and improves the accuracy with which judges interpret your project's requirements and functionality.
  • Providing judges with a workflow/development process will give them a clearer sense of how your project came to fruition. This may include the tasks and requirements that constitute the process and the people or resources required to deliver it.
  • Similar to a workflow, a data/application flow will help judges follow the actual functionality and flow of information through your process or system. This is simply an overview that you should be able to elaborate on.
  • In addition to your core project, it is advisable to develop a business and marketing plan if applicable. This conveys a forward-looking approach and shows a practical plan for advancing your idea.

Previous Winners

2020 First Place Winner

Project Name: Software Application for Deaf/Hearing Impaired to Experience Music for Better Quality of Life

Project Category: STEM Miscellaneous (robotics, hardware, etc.)

Project Description: A program that takes music input and translates the feeling and emotion expressed in the music into a visual display of colors that express the same emotions, all achieved in real time with little to no latency. This project will give the deaf and hearing-impaired an alternate way of appreciating music. The hope is for the project to be used in music therapy or other educational applications to help the deaf and hearing impaired appreciate music and have a chance to experience it.

Creativity & Originality Comments: This project takes the existing capabilities of current music visualizers and extends them by creating visual effects that parallel the experiences we feel through our ears, all in real time, with no setup required. Currently available music visualizers are unable to convey the music’s emotion, and more detailed displays, like LuminoCity, take tremendous amounts of preparation ahead of the display. My project builds upon two aspects of a music visualizer: the music identification aspect and the color display aspect. For music identification, my program can not only identify the dynamics, pitch, and articulation of a song, but also use this data to identify the exact chord being played and the time at which it is played. This gives the program more data to use in creating the display. For the color display, my program does not just display colors at random. Depending on the chord or note input, it outputs a specific color that corresponds with the color a musician with synesthesia would see. This makes the display more meaningful to the listener, since the color is derived from the meaning and emotion of the music. For example, I found through data analysis of songs that chords are a crucial factor in a song’s emotion. I created a way to take the chords from a song and use them to create a visual display, based on the chord-color wheel created by a musician with synesthesia. Additionally, my project can translate the music instantly while it is playing. This allows users to experience the music in real time with normal-hearing family and friends.

Presentation Comments: I have practiced and shared my presentation with my CS teacher, Marcus Twyford. I have also discussed my project with music therapy professionals to ensure its viability in the real world and to learn the correct vocabulary for describing the hearing impaired and deaf.

Real World Impact Comments: This project gives the deaf and hearing impaired greater access to the experience of music. Those with hearing disabilities are limited in connecting with others. With nearly 1 million Americans functionally deaf and 10 million hard of hearing, many people experience this disconnection. My project can give the hearing impaired greater social participation by creating an intuitive connection between music and light. With this music interpretation program, those with hearing impairments can feel the music through different senses, discerning its emotion even if they cannot hear it. It allows them to connect with others through music; socially, they will be less likely to be left out of moments with friends or family, and will therefore enjoy a better quality of life. This project can also be used in music therapy for those with learning disabilities, who often struggle to connect with people. With this program, they can experience music in a new way, see the emotions in the music more clearly, make better connections with their world, and express their own emotions better.

Technical Difficulty Comments: The technology I used included a machine-learning algorithm and VAMP plugins. The machine-learning algorithm allowed me to find the music features that affect a song’s emotion; the VAMP plugins gave me the ability to extract certain aspects of the music, such as chords and notes, for use in my program. There were two main challenges. The first was getting Chordino, the program I used to find the chords in the music, to export its output in a form my code could read. Chordino is a VAMP plugin that only runs inside programs such as Sonic Visualizer or Audacity. I first had to find a way to run Chordino without these programs, then create a way for it to output its values in a form I could use with the display portion. The second problem was having the colors change on the screen at the correct time. Initially, the colors would stack on each other: all of them would display, but all at the same time. At first I added a delay, but that did not work. After some investigation, I found the problem had to do with the looping of the draw function in my program. After that, I had to get the chord to change on its timestamp; this was trickier because the elapsed time was based on the internal clock, which became less accurate past the tenths place. I had to round both the timer and the timestamp associated with each chord. With this, I was able to iterate through each line of the .txt file, getting the timestamp and chord, and have the function display each color at the correct time.
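The timestamp-rounding fix described above can be sketched in a few lines of Python. This is an illustrative sketch, not the winner’s actual code: the line format (`"<seconds> <chord>"`), the function names, and the tenths-of-a-second rounding window are all assumptions based on the description.

```python
# Illustrative sketch (not the winner's actual code) of matching a
# chord file against an imprecise internal timer by rounding both
# sides to tenths of a second. Assumes each line of the Chordino
# output looks like "<seconds> <chord>", e.g. "1.25 G" -- the real
# file format may differ.

def parse_chords(lines):
    """Parse (timestamp, chord) lines, rounding stamps to tenths of a second."""
    chords = []
    for line in lines:
        stamp, chord = line.split(maxsplit=1)
        # The internal timer is only reliable to the tenths place, so
        # both the timer and each timestamp are rounded before comparison.
        chords.append((round(float(stamp), 1), chord.strip()))
    return chords


def chord_at(chords, elapsed):
    """Return the chord active at the (rounded) elapsed time, or None."""
    t = round(elapsed, 1)
    current = None
    for stamp, chord in chords:
        if stamp <= t:
            current = chord
        else:
            break
    return current
```

A draw loop would then call `chord_at(chords, timer)` each frame and paint whatever color is mapped to that chord, so the display changes exactly when the rounded timer crosses each rounded timestamp.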

2020 Second Place Winner

Project Name: Third Eye

Project Category: STEM Miscellaneous (robotics, hardware, etc.)

Project Description: Our showcase aims to fill a void in visually impaired navigation technology by creating a wearable, affordable headset that uses ultrasonic sensors to allow blind people to navigate their surroundings more easily. 

Our strategy was to focus on three main selling points that would make 3rd Eye better than the rest, rather than trying to make everything about 3rd Eye superior to the competitors. The three main points were affordability, convenience, and aesthetics, and all decisions were based on improving 3rd Eye in one of these areas. In terms of beating competitors in these three areas, we believe we have succeeded. (However, there is still much that we can improve upon in the future.)

The projected outcome for 3rd Eye is a navigation device that places much less burden on the visually impaired in terms of both cost and functionality. We plan to keep working on 3rd Eye after TechOlympics, and hope to eventually make it into a product that can actually help people who struggle with blindness in the real world.

Creativity & Originality Comments: In terms of innovation, there are extremely few devices that offer full 360-degree sensing for blind people, so Third Eye is one of the few products seeking to expand this area of the market for the visually impaired. What sets Third Eye apart from the few 360-degree sensing products available are its focus points. All the sensors we’ve come across pay little to no attention to what the device looks like on the wearer, and cater to people who can shell out exorbitant amounts of cash for their product; they occupy the high end of the market. Our approach to creating Third Eye started by trying to solve a range of problems rather than forcing a product onto the problems. The average blind person makes significantly less money than the average unimpaired person, yet has a higher cost of living (in the form of more doctor’s visits, aide dogs, canes, devices, and caretakers). We determined Third Eye to be the most effective solution to this problem within our power to accomplish. It doesn’t completely solve the problem, however, so in the future, in trying to come closer to addressing blind people’s economic disadvantage, our work may not be limited to Third Eye but may extend to other inventions as well.

Presentation Comments: We plan to make videos that accurately demonstrate how 3rd Eye functions, and have been practicing and tweaking our presentation with one of our computer science teachers. We will have predetermined speaking roles, and will go through possible question scenarios so that we can be professional in our answers. We have also already made a product timeline explaining how we have progressed from when we first began, and how the product will look further down the line, after TechOlympics.

Real World Impact Comments: 3rd Eye seeks to fill a void in the market of navigation devices for the visually impaired. After researching the topic, we found that most devices require the user to carry a baton or some other sensing device in hand, which can limit the user’s freedom of motion and ability to use both hands for other tasks. The few devices that did not have to be held were rather large, clunky, and not very aesthetically pleasing. (Blind people are like anyone else: they want to look good in public.) The available devices varied greatly in price, from a few hundred to multiple thousand dollars, but the lowest price we could find was $300, and that device was a potentially limiting handheld baton. We believe that 3rd Eye, our showcase project, would fill the void for a low-cost, functional navigation device for blind people that does not restrict the wearer’s movement. It would place much less burden on the visually impaired in terms of cost and would increase their freedom to navigate safely and conveniently.

Technical Difficulty Comments: We used a variety of technologies to make our headset, and made most of our design choices to satisfy our constraint of affordability. We 3D printed the frame of 3rd Eye, making it very cost-effective and rather durable. The tightening mechanism was taken from a construction helmet and attached to our 3D-printed frame. The actual sensing portion of 3rd Eye consists of 24 ultrasonic sensors that send out many high-pitched, inaudible pulses per second. The time between a pulse leaving the sensor and its echo bouncing back was recorded, and from that time we could calculate the distance of an object from the wearer. We essentially simulated echolocation. The central brain of our sensor was a Raspberry Pi 3 B+, and all coding for 3rd Eye was done in Python.
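The echolocation calculation the team describes is the standard ultrasonic ranging formula: distance is half the round-trip path of the pulse. The sketch below is an illustration of that formula, not the team’s actual code; the function name and the assumed speed of sound are my own choices.

```python
# Illustrative sketch (not the team's actual code) of ultrasonic
# ranging: a pulse travels to the obstacle and back, so the one-way
# distance is half the total path covered at the speed of sound.

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees C


def distance_m(round_trip_s):
    """Distance to an obstacle given the pulse's round-trip time in seconds.

    d = (t * v) / 2, where t is the echo round-trip time and v is the
    speed of sound; the division by 2 accounts for the out-and-back path.
    """
    return round_trip_s * SPEED_OF_SOUND / 2.0
```

For example, an echo returning after 10 ms corresponds to an obstacle about 1.7 m away, which is the kind of near-field range a wearable obstacle sensor cares about.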

Our biggest challenge was decreasing the size of 3rd Eye while keeping all of its functionality. We needed to make it smaller so that it wouldn’t look too clunky; however, if we placed the sensors too close to each other, they would begin interfering with one another. There were also issues with wiring: the smaller the headset got, the less room we had for wires, so we had to re-solder wires in different positions and combinations to fit everything inside the headset. We eventually solved the interference issue by alternating which sensors were activated, guaranteeing that one sensor would not interfere with another. The problem of 3rd Eye being too large was solved through testing and iteration: we designed a model, printed it, tested it, and made changes as needed. It wasn’t a particularly complex problem, but it was extremely time-consuming.
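The alternating-activation idea above can be sketched as a simple two-phase schedule: split the ring of sensors into even and odd indices and fire only one group per cycle, so no two neighbours ping at the same time. This is a hypothetical illustration of the scheduling concept, not the team’s actual Python; the function names and two-group split are assumptions.

```python
# Hypothetical sketch of alternating sensor activation: with an even
# number of sensors arranged in a ring, splitting them into even- and
# odd-indexed groups guarantees no two adjacent sensors fire together.

def activation_groups(num_sensors):
    """Split ring positions 0..num_sensors-1 into two non-adjacent groups."""
    evens = [i for i in range(num_sensors) if i % 2 == 0]
    odds = [i for i in range(num_sensors) if i % 2 == 1]
    return evens, odds


def firing_schedule(num_sensors, cycles):
    """Alternate between the two groups on successive sensing cycles."""
    groups = activation_groups(num_sensors)
    return [groups[c % 2] for c in range(cycles)]
```

With the headset’s 24 sensors, sensor 0 and sensor 23 sit next to each other on the ring but fall into different groups, so the wrap-around neighbours never fire in the same cycle either.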