A Hybrid-Integrated K-/Ka-Band Phased Array Module with Dual-Polarized Shared Aperture
IEEE Transactions on Microwave Theory and Techniques (2024)
Abstract
In this work, a passive dual-polarized shared antenna aperture and active beamformer front-ends are integrated into a phased array module. The system operates in the K-/Ka-band and is intended for satellite communications. It consists of a hybrid assembly of aluminum waveguide components and printed circuit boards that are electrically connected by anisotropic conductive adhesive, containing a heterodyne transceiver with -band interfaces. For proof of concept, a demonstrator with 64 transmit (Tx) and 32 receive (Rx) elements is fabricated and characterized. It provides full-duplex Tx/Rx operation with bandwidths of 1 GHz centered around 29.5 GHz (Tx) and 19.5 GHz (Rx), beam scanning over wide angular ranges up to 60° in both E- and H-planes, an equivalent isotropic radiated power (EIRP) of up to 20.4 dBW at 1-dB power compression, and an Rx gain-to-noise-temperature ratio (G/T) of up to 10.2 dB/K at room temperature. The measured performance is comparable to that of phased array systems with separate apertures and thus validates the integration approach.
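For context, the two array-level figures of merit quoted in the abstract follow the standard phased-array definitions sketched below. The element-level symbols (P_el, G_el, T_sys) are illustrative placeholders, not values reported in the paper.

\[
\mathrm{EIRP}\,[\mathrm{dBW}] = P_{\mathrm{el}}\,[\mathrm{dBW}] + G_{\mathrm{el}}\,[\mathrm{dBi}] + 20\log_{10} N,
\qquad
G/T\,[\mathrm{dB/K}] = G_{\mathrm{Rx}}\,[\mathrm{dBi}] - 10\log_{10}\!\left(T_{\mathrm{sys}}/1\,\mathrm{K}\right),
\]

where N is the number of coherently combined transmit elements (here N = 64, so coherent combining contributes 20 log10(64) ≈ 36.1 dB over a single element) and T_sys is the receive system noise temperature. As an illustrative check under an assumed T_sys of 500 K (≈ 27.0 dB-K), the measured G/T of 10.2 dB/K would correspond to an Rx array gain of roughly 37.2 dBi.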
Keywords
Dual-frequency, dual-polarized, front-end, full duplex, integration, K-/Ka-band, modular, phased array, planar, receive (Rx), satellite communication (SatCom), shared aperture, transceiver, transmit (Tx)