*Posted on behalf of Adam Vandenbussche, Dartmouth ’22*
My name is Adam and I’m a Dartmouth undergraduate researcher on the SPLICE project. I first became involved with SPLICE as a student in Professor Kotz’s COSC 89.26 SPLICE seminar course last fall. After spending the term reading and discussing papers on a variety of security and privacy concerns in IoT, our culminating project was to conduct a security or privacy analysis of an IoT device or to explore a topic of our choosing in an open-ended research project.
I’ve been curious to learn more about medical IoT, given the particularly sensitive nature of the data this ecosystem produces and manages. For my project, I decided to analyze a Bluetooth-enabled device that, when paired with an accompanying smartphone app,* helps users monitor their medication adherence. To thoroughly test the main functionality of the device and app, I used PCAP Remote, an open-source packet sniffer, to capture network traffic, and Android’s adb debugging utility to retrieve the phone’s Bluetooth packet logs. I then analyzed the intercepted data using Wireshark, a popular open-source packet analysis program.
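Both capture paths produce files in the classic pcap container that Wireshark opens. As a rough illustration of what that container looks like (the bytes below are synthetic, not from my actual captures), here is a minimal sketch that walks a pcap byte stream and counts its packet records:

```python
import struct

def count_packets(data: bytes) -> int:
    """Count packet records in a classic pcap byte stream."""
    magic = struct.unpack("<I", data[:4])[0]
    if magic == 0xA1B2C3D4:      # file fields are little-endian
        endian = "<"
    elif magic == 0xD4C3B2A1:    # file fields are big-endian
        endian = ">"
    else:
        raise ValueError("not a classic pcap file")
    offset, count = 24, 0        # skip the 24-byte global header
    while offset + 16 <= len(data):
        # each 16-byte record header holds: ts_sec, ts_usec, incl_len, orig_len
        incl_len = struct.unpack(endian + "I", data[offset + 8:offset + 12])[0]
        offset += 16 + incl_len  # jump past the header and the packet bytes
        count += 1
    return count

# Synthetic capture: a global header plus two 4-byte packet records.
header = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
record = struct.pack("<IIII", 0, 0, 4, 4) + b"\x00" * 4
capture = header + record * 2
print(count_packets(capture))  # 2
```

In practice Wireshark (or its command-line companion tshark) does all of this parsing for you; the sketch just shows why a capture file can be shuttled between tools so easily.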
I discovered a handful of mostly minor security and privacy vulnerabilities while analyzing the collected data, but one vulnerability particularly troubled me. Although the app’s API served most of its endpoints over the encrypted HTTPS protocol, it served two of them—the image upload and download endpoints—over the unencrypted HTTP protocol. The images transmitted over these endpoints could include users’ faces, such as profile pictures, or medical information, such as images of documents discussing their medication. This lack of encryption for the transmission of highly sensitive information gravely threatened user privacy.
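A quick way to triage this kind of finding is to scan the list of observed API endpoints for URLs whose scheme is plain `http`. The endpoints below are hypothetical stand-ins (the real vendor API is intentionally not named); a minimal sketch:

```python
from urllib.parse import urlparse

def find_unencrypted(urls):
    """Return the URLs served over plain HTTP, i.e. without transport encryption."""
    return [u for u in urls if urlparse(u).scheme == "http"]

# Hypothetical endpoints modeled on the pattern described above.
endpoints = [
    "https://api.example.com/v1/login",
    "https://api.example.com/v1/adherence",
    "http://api.example.com/v1/images/upload",    # unencrypted!
    "http://api.example.com/v1/images/download",  # unencrypted!
]
print(find_unencrypted(endpoints))
```

The same check can be done directly in Wireshark by filtering for `http` traffic: any application payload readable in the clear there was readable to anyone on the network path.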
As a novice ethical hacker, I felt it important to alert the vendor to this vulnerability to avoid any further compromise of users’ privacy. I first informed the company by email, but much to my chagrin, my initial message—as well as the follow-ups I sent 45 and 75 days later—went unanswered. Unfortunately, 90 days after my initial outreach I still had yet to hear from the company.
My next step was to inform the vendor in writing by mail. Despite sending a registered letter containing a report that detailed how to reproduce the issue, and despite the post office confirming its delivery, I still received no response from the company.
My last resort was to report the vulnerability to the Cybersecurity and Infrastructure Security Agency (CISA), a branch of the Department of Homeland Security, and hope that it would have more luck getting through. Within a week of submitting my report to CISA, I heard back from the vendor, which acknowledged the vulnerability and disabled the implicated features. A day later, CISA confirmed that it had successfully contacted the vendor and that the issue had been patched.
Overall, I was most impressed with CISA’s quick turnaround time and learned a lot about the responsible disclosure process through this experience. It feels good that my work through the SPLICE project has had a direct, positive impact—however small—on the security of a smart product.
* As the disclosure has not been publicized, I will refrain from identifying the vendor.