A group of doctoral students from Bar-Ilan University’s Faculty of Engineering and Institute of Nanotechnology and Advanced Materials has discovered a way to improve the quality of light and sound on silicon chips — by slowing them down. Photonics and electronics combine on silicon chips to form integrated systems that allow for data processing and communication. However, while many people believe that speed is of the essence when it comes to technology, in the case of optical and electrical signals, the opposite can often prove to be true.
“Important signal processing tasks, such as the precise selection of frequency channels, require that data be delayed over time scales of tens of nanoseconds. Given the speed of light, optical waves propagate over many meters within these timeframes. One cannot accommodate such path lengths in a silicon chip; it is simply unrealistic. In this race, fast doesn’t necessarily win.”
Engineers have grappled with this difficulty in analog electronic circuits for roughly sixty years, and there the solution proved to be acoustic. The signal is converted from the electrical domain into an acoustic wave, which travels far more slowly, making it much easier for a chip to accommodate the necessary path length. Once the wave has propagated, the signal is converted back to electronic form with relative ease.
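A rough back-of-the-envelope sketch makes the advantage of the acoustic detour concrete. The speeds below are illustrative assumptions (light in vacuum, and a typical surface-acoustic-wave speed of about 5,000 m/s), not figures reported by the research team:

```python
# Compare how far light and sound travel during a signal-processing
# delay of tens of nanoseconds. Speeds are illustrative assumptions.

C_LIGHT = 3.0e8      # m/s, speed of light in vacuum
V_ACOUSTIC = 5.0e3   # m/s, assumed surface-acoustic-wave speed

def path_length_m(speed_m_s, delay_ns):
    """Distance in meters a wave covers during the given delay."""
    return speed_m_s * delay_ns * 1e-9

DELAY_NS = 10.0
optical_m = path_length_m(C_LIGHT, DELAY_NS)      # meters of waveguide
acoustic_m = path_length_m(V_ACOUSTIC, DELAY_NS)  # tens of micrometers

print(f"Optical path for {DELAY_NS:.0f} ns:  {optical_m:.1f} m")
print(f"Acoustic path for {DELAY_NS:.0f} ns: {acoustic_m * 1e6:.0f} µm")
```

Under these assumptions a 10 ns delay demands meters of optical path but only about 50 micrometers of acoustic path, which is why converting the signal to sound makes on-chip delay feasible.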
Now researchers believe they’ve found a way to replicate this principle in silicon-photonic circuits. There is a catch, however: the standard layer structure used in silicon photonics cannot confine or guide sound waves.
The answer was found in illuminating metals. A beam of incoming light carries the signal to a metal pattern on the chip. Irradiated by the light, the metal expands and contracts, straining the silicon underneath. When the pattern is properly designed, that strain drives surface acoustic waves, which then travel across the standard optical waveguides on the same chip. This provides the necessary delay and converts the signal as needed.
Using this principle, the team has reached acoustic frequencies of up to 8 GHz; that being said, researchers believe their concept is scalable up to 100 GHz. Experts are continuing to explore this new principle, and how it might one day help to support the processing requirements of ultra-fast, high-powered 5G networks.
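The reported frequencies also hint at the fabrication challenge in scaling the concept. Assuming, for illustration, a surface-acoustic-wave speed of about 5,000 m/s (an assumption, not a figure from the article), the acoustic wavelength shrinks inversely with frequency:

```python
# Acoustic wavelength at the reported frequencies, assuming an
# illustrative surface-acoustic-wave speed of ~5000 m/s.

V_ACOUSTIC = 5.0e3  # m/s, assumed SAW speed

def wavelength_nm(freq_hz):
    """Acoustic wavelength in nanometers at the given frequency."""
    return V_ACOUSTIC / freq_hz * 1e9

print(f"  8 GHz -> {wavelength_nm(8e9):.0f} nm")
print(f"100 GHz -> {wavelength_nm(100e9):.0f} nm")
```

At 8 GHz the assumed wavelength is a few hundred nanometers; pushing toward 100 GHz would shrink the pattern features accordingly, into the tens of nanometers.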