Did you know that music and diagnostic imaging have something in common? Sounds have a lower or higher pitch depending on the size of the object that creates them. Tubas and double basses are big and produce deep, low-pitched sounds, while flutes and violins are small and produce high-pitched sounds. What’s interesting is that the same effect occurs when biological structures like cells or tissues emit sound – the pitch varies with size.
But what kind of sounds do biological structures make? Moreover, how can we listen to them?
Capitalizing on the correlation between size and pitch, a Ryerson-led research team working out of the Institute for Biomedical Engineering, Science & Technology (iBEST) at St. Michael’s Hospital recently developed a mode of imaging so novel that their study results were published in the Nature Research journal Communications Physics.
An appreciation of this breakthrough begins with the basics of photoacoustic (PA) imaging, a modality that is quickly gaining traction in biomedical research. Like its cousin, ultrasound (US) imaging, PA imaging creates a visual image of biological structures by collecting sound waves.
While US imaging works by sending sound waves into a biological structure and listening for the echoes that bounce back, PA imaging does something entirely different.
“With photoacoustic imaging, we project light into structures that will absorb it, such as blood vessels,” says Dr. Michael Kolios, the PA imaging pioneer who supervised the study. “Light waves cause biological structures to heat up by a tiny fraction, which triggers an almost imperceptible expansion in volume. When that happens, sound is generated, like thunder after a lightning strike.”
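For readers who want that chain of events in symbols, the standard textbook expression for the initial photoacoustic pressure (a general physics reference point, not a formula quoted from the study) is:

    p_0 = \Gamma \, \mu_a \, F

where \Gamma (the Grüneisen parameter) measures how efficiently absorbed heat is converted into pressure, \mu_a is the optical absorption coefficient of the structure, and F is the laser fluence. Stronger absorbers and brighter light pulses produce louder photoacoustic sound.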
Most existing PA imaging techniques measure amplitude (loudness), rendering areas that emit louder sounds as brighter pixels. What the Ryerson-led team set out to develop was a technique that would instead measure the frequency (pitch) of sounds emitted from biological structures.
“Depending on the size of a biological structure, the pitch of the sound waves it emits will be higher or lower,” says Dr. Michael Moore, a Medical Physics Resident at Grand River Hospital in Kitchener who led the research team as a doctoral student under the supervision of Kolios. “If we could filter incoming sounds by frequency, we could create images that focus on structures of a particular size, which would help to reveal features that might otherwise be hidden or less prominent.”
The team developed a technique they call F-Mode (for frequency), which enabled them to subdivide PA signals into different frequency bands. They then successfully demonstrated selective enhancement of features of different sizes in samples ranging from biological cells to live zebrafish larvae – all without the contrast dyes typically required by other state-of-the-art imaging techniques.
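The release does not spell out the team’s processing pipeline, but the core idea of F-Mode – splitting each photoacoustic signal’s spectrum into frequency bands and building one image per band – can be sketched in a few lines of Python. Everything below (the function name, array shapes, sampling rate and band edges) is an illustrative assumption, not the authors’ code:

    # Illustrative sketch (not the authors' code): split per-pixel photoacoustic
    # time signals into frequency bands and form one image per band, so that
    # structures of different sizes, which "sing" at different pitches, can be
    # emphasized separately.
    import numpy as np

    def band_images(rf, fs, bands):
        """rf: (ny, nx, nt) array of PA time signals, one per pixel.
        fs: sampling rate in Hz. bands: list of (f_lo, f_hi) pairs in Hz.
        Returns one intensity image per frequency band."""
        nt = rf.shape[-1]
        power = np.abs(np.fft.rfft(rf, axis=-1)) ** 2    # power spectrum per pixel
        freqs = np.fft.rfftfreq(nt, d=1.0 / fs)          # frequency axis in Hz
        images = []
        for f_lo, f_hi in bands:
            in_band = (freqs >= f_lo) & (freqs < f_hi)
            images.append(power[..., in_band].sum(axis=-1))  # energy in this band
        return images

    # Toy example: a 32 x 32 pixel grid, 256 time samples at a 200 MHz sampling
    # rate, split into three hypothetical bands (low pitch ~ large features,
    # high pitch ~ small features).
    rng = np.random.default_rng(0)
    rf = rng.standard_normal((32, 32, 256))
    low, mid, high = band_images(rf, fs=200e6,
                                 bands=[(1e6, 20e6), (20e6, 60e6), (60e6, 100e6)])
    print(low.shape, mid.shape, high.shape)

In practice, real beamformed PA data and carefully chosen band edges would replace the random numbers here; the point is simply that each output image highlights sound of a particular pitch, and therefore features of a particular size.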
Moore and Kolios are quick to point out that a key to their success was the opportunity to work at iBEST and with Dr. Xiao-Yan Wen and his team at the Zebrafish Centre for Advanced Drug Discovery. “Without the knowledge and expertise of the team at the Wen Lab, it would not have been possible to demonstrate that our technique works,” says Moore.
The research team, which includes Ryerson Biomedical Physics doctoral candidates Eno Hysi and Muhannad Fadhel, is now taking steps toward translating F-Mode into clinical applications, where it could be of widespread benefit. For example, the ability to segment and enhance features of different scales has significant potential in areas such as ophthalmology, neurosurgery and the detection of conditions such as hypertension.
###