Technical Seminar Series
Would you like to hear a Lincoln Laboratory talk?
We are pleased to present technical seminars to interested college and university groups. To arrange a technical seminar, please call (781) 981-2465 or email [email protected] and provide a rank-ordered list of requested seminars, your preferred date/time options, and a description of the target audience.
We offer the seminars in the following areas: air traffic control; biotechnology and human systems; communication systems; cyber security and information sciences; engineering; homeland protection; human language technology; machine learning and artificial intelligence; radar and signal processing; advanced technology; and space systems technology. Please find related seminars and abstracts below.
Air Traffic Control
Technical seminars offered in air traffic control.
Advanced Air Mobility Environmental and Airspace Impacts Modeling
Luis E. Alvarez1
MIT Lincoln Laboratory
There is a growing vision for Urban Air Mobility (UAM) that industry and government agencies are actively pursuing. This vision has unmanned and manned aircraft operating at low altitudes (below 5,000 feet) to transport people and goods in urban environments. Market studies project that by 2040, the U.S. national airspace could see as many as 500 million package deliveries and 740 million air-metro passengers. To produce the necessary regulations and standards for vehicles and airspace, civil aviation authorities must understand the scope of UAM operations and their potential impact on existing airspace operations and the local population. This seminar will discuss the key challenges UAM must address to become a reality and the modeling and simulation capabilities that are under development to assess this transportation mode. In particular, this seminar will walk through an example of how UAM operations can be designed, how weather might impact operations, how UAM compares to ground transportation, and what the associated data requirements are.
1 MS, Aerospace and Astronautics, Massachusetts Institute of Technology
Next-Generation Weather Radar: The Multifunction Phased Array Radar Advanced Technology Demonstrator
Christine F. Parry1
MIT Lincoln Laboratory
Since 2007, Lincoln Laboratory’s Multifunction Phased Array Radar (MPAR) program has been focused on the research and development of low-cost phased array technology that can support multiple mission areas. The MPAR Advanced Technology Demonstrator (ATD) is the risk-reduction prototype that was installed at the National Severe Storms Laboratory (NSSL) in Norman, Oklahoma, in 2018, where it is currently used for storm detection and early storm warnings. Applications to other mission areas, such as terminal and en route air traffic control sensing, will be investigated after Initial Operational Capability is reached in 2021. This briefing will discuss the objectives and technical challenges of building, installing, and calibrating the MPAR ATD; the specific needs of weather radars and how MPAR addresses them; and the role of MPAR in the future of weather forecasting. Preliminary results of storm season data collection will also be shown and discussed.
1 MS, Computer Information Systems/Networking, Carnegie Mellon University
Radar Detection of Aviation Weather Hazards
Dr. John Y. N. Cho1
MIT Lincoln Laboratory
Bad weather is a factor in many aviation accidents and incidents. Microbursts, hail, icing, lightning, fog, and turbulence are atmospheric phenomena that can interfere with aircraft performance and a pilot's ability to fly safely. For safe and efficient operation of the air traffic system, it is crucial to continuously observe meteorological conditions and accurately characterize phenomena hazardous to aircraft. Radar is the most important weather-sensing instrument for aviation. This seminar will discuss technical advances that led to today's operational terminal wind-shear detection radars. An overview of recent and ongoing research to improve radar capability to accurately observe weather hazards to aviation will also be presented.
1 PhD, Electrical Engineering, Cornell University
System Design in an Uncertain World: Decision Support for Mitigating Thunderstorm Impacts on Air Traffic
Richard A. DeLaura1
MIT Lincoln Laboratory
Weather accounts for 70% of the cost of air traffic delays—about $28 billion annually—within the United States National Airspace System. Most weather-related delays occur during the summer months, when thunderstorms affect air traffic, particularly in the crowded Northeast. The task of air traffic management, complicated even in the best of circumstances, can become overwhelmingly complex as air traffic managers struggle to route traffic reliably through rapidly evolving thunderstorms. A new generation of air traffic management decision support tools promises to reduce air traffic delays by accounting for the potential effects of convective weather, such as thunderstorms, on air traffic flow. Underpinning these tools are models that translate high-resolution convective weather forecasts into estimates of impact on aviation operations.
This seminar will present the results of new research to develop models of pilot decision making and air traffic capacity in the presence of thunderstorms. The models will be described, initial validation will be presented, and sources of error and uncertainty will be discussed. Finally, some applications of these models and directions for future research will be briefly described.
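For illustration, the kind of model described here can be sketched as a logistic regression that converts forecast weather quantities along a route into a probability that a pilot will deviate around convective weather. The features and coefficients below are illustrative assumptions, not the models presented in the seminar.

```python
import math

# Illustrative sketch of a pilot-deviation model: a logistic function
# of weather features along a planned route. Coefficients are
# assumptions for illustration, not fitted values.
INTERCEPT = -1.0
W_VIL = 0.8        # weight on normalized vertically integrated liquid
W_ECHO_TOP = 0.6   # weight on echo-top height above cruise altitude

def deviation_probability(vil_norm: float, echo_top_margin_kft: float) -> float:
    """Estimate the probability that a pilot deviates around weather.

    vil_norm: VIL along the route on a 0-1 normalized scale
    echo_top_margin_kft: storm-top height above cruise altitude (kft)
    """
    z = INTERCEPT + W_VIL * vil_norm + W_ECHO_TOP * echo_top_margin_kft
    return 1.0 / (1.0 + math.exp(-z))

# Example: strong cells topping 5 kft above cruise altitude
print(f"P(deviate) = {deviation_probability(0.9, 5.0):.2f}")
```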
1 AB, Physics, Harvard University
Biotechnology and Human Systems
Technical seminars offered in biotechnology and human systems.
Advanced Remote Sensing for Disaster Response
Chad Council1 and Daniel Dumanis2
MIT Lincoln Laboratory
The United States experiences more natural disasters than any other country in the world and spends more money on disaster response and recovery than any other nation. In the last 10 years alone, the United States has spent over $800 billion on disaster response and recovery activities, and there is no indication that disaster frequency or severity will meaningfully decline in the coming decades.
At the core of disaster support is supplying concise and comprehensive spatial and temporal information to emergency managers and decision makers. Remote sensing has the ability to meet or exceed many of the fundamental information needs for successfully mounting and executing a faster and lower-cost response to disasters and related humanitarian crises. This talk will highlight novel remote sensing technologies, their applications in disaster response, and the tools necessary to integrate these new and emerging capabilities into everyday responses. It will describe several disaster response campaigns that have used advanced technologies and analysis techniques to showcase their benefit and to define the potential future of emergency response.
1 MS, Emergency Management, Massachusetts Maritime Academy
2 BE, Computer Engineering, Northeastern University
Biological Agent Detection, Identification, and Response
Dr. Meghan E. Ramsey1
MIT Lincoln Laboratory
Bioterrorism represents a significant threat to both military and civilian settings. As the ongoing COVID-19 pandemic has brought into sharp relief, biological threats extend beyond intentional attacks to include naturally occurring infectious diseases, which can have enormous impacts on the economy and human health. Rapid, high-confidence detection of biological threats is essential to initiate a timely response, deploy medical countermeasures, and save lives. This seminar will discuss currently available biosensor technologies, inherent challenges associated with environmental detection of biological threats, and future trends in biosensor and medical countermeasure development.
1 PhD, Microbiology, University of Wisconsin–Madison
Complex DNA Mixture Analysis: Massively Parallel Sequencing (MPS) of Rare Single Nucleotide Polymorphisms (SNPs)
Dr. Martha S. Petrovick1, Dr. Darrell O. Ricke2, Dr. Eric D. Schwoebel3, and Dr. Joshua Dettman4
MIT Lincoln Laboratory
DNA mixtures from three or more contributors have proven difficult to analyze using the current state-of-the-art method of short-tandem repeat (STR) amplification followed by capillary electrophoresis (CE). This difficulty is due to multiple issues, including the sharing of alleles between different contributors, the production of confounding stutter products during amplification, and the small amount of DNA recovered from some contributors, which causes allele dropout. The discriminatory power of STRs lies in the fact that they are highly polymorphic, even though the individual alleles are not necessarily rare. This feature has enabled forensic scientists to streamline their STR analysis workflow to tens of loci in the genome. In contrast, there are only two alleles at each SNP locus used in this study, and the presence of each minor allele in an individual is relatively uncommon. Therefore, more SNP loci are required to produce a unique individual signature, and many more SNP loci are required for successful analysis of complex mixtures.
SNP loci were selected on the basis of low fixation index (FST) values and low global minor allele frequencies. The panel to be used for mixture analysis was selected from the larger primer panel on the basis of specific criteria (FST ≤ 0.08, minor allele frequency ≤ 0.3, number of reads within a 10-fold range, strand bias ratio > 0.5, minor allele ratio for homozygous major loci ≤ 0.005, no Mendelian errors, distance of ≥ 500,000 base pairs between loci). Primers meeting these criteria created a mixture panel targeting 2,655 loci, 2,311 of which are used for mixture analysis, while an additional 344 loci were included for identification and biogeographic ancestry prediction. Bioinformatic analysis of SNP data is accomplished using a custom software platform, IdPrism, which was developed with a simple user interface and is composed of modules that identify sequence variants (minor alleles); perform identification and mixture analysis; and determine familial relationships, phenotypes, and biogeographic ancestry. Using this new approach, Lincoln Laboratory is able to resolve mixtures of up to 10 individuals with as little as 1 ng of total input DNA.
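As an illustration of the panel-selection step, the sketch below applies the filtering criteria listed above to hypothetical locus records. The field names and example values are assumptions; the thresholds are those stated above (the read-depth range check is omitted for brevity).

```python
# Hypothetical locus records; thresholds follow the criteria stated
# in the abstract (read-depth range check omitted for brevity).
candidate_loci = [
    {"id": "rs0001", "chrom": "1", "pos": 1_200_000, "fst": 0.05,
     "maf": 0.12, "strand_bias": 0.9, "minor_ratio_hom": 0.001,
     "mendelian_errors": 0},
    # ... additional candidate loci ...
]

def passes_criteria(locus) -> bool:
    return (locus["fst"] <= 0.08
            and locus["maf"] <= 0.3
            and locus["strand_bias"] > 0.5
            and locus["minor_ratio_hom"] <= 0.005
            and locus["mendelian_errors"] == 0)

def spaced(loci, min_distance=500_000):
    """Keep loci at least min_distance base pairs apart per chromosome."""
    kept, last_pos = [], {}
    for locus in sorted(loci, key=lambda l: (l["chrom"], l["pos"])):
        prev = last_pos.get(locus["chrom"])
        if prev is None or locus["pos"] - prev >= min_distance:
            kept.append(locus)
            last_pos[locus["chrom"]] = locus["pos"]
    return kept

panel = spaced([l for l in candidate_loci if passes_criteria(l)])
print(f"{len(panel)} loci retained for the mixture panel")
```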
1 PhD, Cell and Developmental Biology, Harvard University
2 PhD, Molecular Biology, Mayo Graduate School
3 PhD, Cell Biology, Baylor College of Medicine
4 PhD, Chemistry, The Ohio State University
Developing a Model-Based Systems Engineering Architecture for Defense Wearable Technology
Jillian Cyr1 and Tara Sarathi2
MIT Lincoln Laboratory
The rapid advancement of commercial wearable sensing technologies provides an unprecedented opportunity to gather information that improves warfighter performance during military activities and to detect the onset of illnesses (such as COVID-19) through surveillance. However, the promise of improved performance and illness prevention through these technologies remains unfulfilled because of the complexity of guaranteeing that technology development outside of the standard military acquisition cycle will meet military requirements. The key to meeting this challenge is to facilitate coordination among R&D efforts, commercially developed products, and military acquisition strategies. To address this, we developed a model-based systems engineering architecture and methodology for validating independently developed wearable system designs against military end-user needs. This methodology includes developing a conceptual framework, a model library, and a capability needs matrix that maps defense mission characteristics to physiological states and product design implementations. This architecture allows military stakeholders to determine where capability gaps or opportunities for wider application of commercial technologies exist, thus providing a bridge between externally developed wearable sensing technologies and military acquisition strategies.
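As a simplified illustration of the capability needs matrix concept, the sketch below maps hypothetical mission characteristics to required physiological states and checks them against candidate product capabilities to surface gaps. All entries are invented for illustration, not drawn from the Laboratory's model library.

```python
# Toy capability needs matrix: missions mapped to the physiological
# states that must be sensed, checked against what candidate
# products can measure. All entries are illustrative.
capability_needs = {
    "dismounted patrol": {"core temperature", "hydration", "heart rate"},
    "dive operations":   {"blood oxygenation", "core temperature"},
}
product_capabilities = {
    "chest-strap sensor":   {"heart rate"},
    "wearable thermometer": {"core temperature"},
}

covered = set().union(*product_capabilities.values())
for mission, needs in capability_needs.items():
    gaps = needs - covered
    print(f"{mission}: capability gaps -> {sorted(gaps) or 'none'}")
```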
1 MS, Aeronautical and Astronautical Engineering, University of Southern California
2 MS, Materials Science and Engineering, Massachusetts Institute of Technology
Human-Systems Integration in Complex Support Systems
Dr. Hayley J. Davison Reynolds1
MIT Lincoln Laboratory
Lincoln Laboratory has had a successful history of integrating decision support systems into the air traffic control and emergency management domains, even though the introduction of new technologies often meets user resistance. Because of the complex nature of the work, air traffic controllers and emergency managers heavily rely on certain technologies and decision processes to maintain a safe and effective operating environment. This reliance on familiar technology and processes makes the introduction of a new tool into the environment a difficult task, even though the tool may ultimately improve decisions and raise the safety and/or efficiency of the operation. Poorly integrated systems can leave users disappointed by a system that provides information for which they have no concept of use or, more likely, can result in the system's not being used despite its presence in the array of available information systems; neither outcome yields the benefits for which the tool was designed.
In this seminar, a practical methodology for designing and fielding decision support systems that maximizes the potential of effective integration of the system into the users' operational context will be presented. Several examples of air traffic control and emergency management decision support prototype systems designed by Lincoln Laboratory, including the Route Availability Planning Tool, the Tower Flight Data Manager, and the Hurricane Evacuation decision support tool, will be described to demonstrate this process. Included in the presentation will be areas in which the designers ran into roadblocks in making the systems effective and the combination of qualitative and quantitative techniques used to help properly integrate the system in the field, yielding measurable operational benefits.
1 PhD, Aeronautical Systems and Applied Psychology, Massachusetts Institute of Technology
Introduction to Serious Games
Amanda M. Casale1
MIT Lincoln Laboratory
Serious games involve the traditional aspects of game play—objectives, rules (e.g., allowable actions), and feedback—but are applied for nonentertainment purposes. Serious games are the latest evolution in tabletop exercises (TTXs). They allow designers to create a wide variety of true-to-life scenarios with complex variables and enable participants to repeatedly drill on the same scenario or test their abilities using a wide variety of simple to challenging situations. The serious games platform has many advantages over TTXs. First, participants can play serious games remotely, which allows engagement with more partners (including those in less accessible areas). Second, the games tend to be short, accessible, and flexible, enabling many exercises, participants, and scenarios (increasing the amount of data collected for analysis and metrics development). And third, the nature in which responses are recorded also increases the amount of objectivity and collaboration compared to a traditional TTX. Lincoln Laboratory has successfully developed specialized serious gaming and training tools to address several Department of Homeland Security and Department of Defense problems. This talk will present an overview of these various efforts and will also provide general insight into serious gaming and the serious gaming methodology.
1 MS, Statistics, Harvard University
Machine-Aided Human Performance Enhancement
Dr. Ryan J. McKindles1, Dr. Ho Chit Siu2, and Dr. Christopher J. Smalt3
MIT Lincoln Laboratory
The integration of wearable technologies with novel, intelligent algorithms can enhance the physical and cognitive abilities of human operators. This presentation will highlight two unique technologies currently under development at Lincoln Laboratory that augment human performance. First, we will discuss the introduction of adaptive attention decoding algorithms with hearables to enhance the listener’s experience in noisy and complex auditory environments. Second, we will show initial advancements in the field of human-exoskeleton teaming with a focus on operationalizing the technology for real-world environments. As part of this presentation, we will discuss supervised and unsupervised machine learning approaches on physiological measurements such as electroencephalography, electromyography, and motion-capture data. These emerging technologies, which can sense the intent of the user, can be adaptive and have the potential to enhance human performance and aid in recovery after injury.
1 PhD, Biomedical Engineering, Marquette University
2 PhD, Aeronautics and Astronautics, Massachusetts Institute of Technology
3 PhD, Electrical Engineering, Purdue University
On-the-Move Monitoring of Human Health and Performance
Dr. Paula Pomianowski Collins1 and Dr. Brian A. Telfer2
MIT Lincoln Laboratory
With the proliferation of commercial wearable devices, we are now able to obtain unprecedented insight into the ever-changing physical state of our bodies. These devices allow real-time monitoring of biosignals that can generate actionable information to enable optimized interventions to avoid injury and enhance performance. Combat and medical planners across all the military services are keenly interested in harnessing wearable sensor advances to diagnose, predict, and improve warfighters’ health and performance. However, moving from civilian promise to military reality is a complex process, with unique requirements of hardware design, real-time networking, data management, cybersecurity, predictive model building, and decision science. Emerging technologies for military on-the-move monitoring will be highlighted, along with a discussion of an integrated open systems architecture approach for functional evolution.
1 PhD, Physics, University of Pittsburgh
2 PhD, Electrical Engineering, Carnegie Mellon University
Communication Systems
Technical seminars offered in communication systems.
Diversity in Air-to-Ground Lasercom: The FOCAL Demonstration
Robert J. Murphy1
MIT Lincoln Laboratory
Laser communications (lasercom) provides significant advantages over RF communications, including a large, unregulated bandwidth and high beam directionality for free-space links. These advantages provide capabilities for high (multi-gigabits per second, or multi-Gb/s) data transfer rates; reduced terminal size, weight, and power; and a degree of physical link security against out-of-beam interferers or detectors. This seminar will address the key components of lasercom system design, including modeling and simulation of atmospheric effects, link budget development, employment of spatial and temporal diversity techniques to mitigate signal fading due to scintillation, and requirements for acquisition and tracking system performance. Results from recent flight demonstrations show stable tracking, rapid reacquisition after cloud blockages, and high data throughput for a multi-Gb/s communications link out to 80 kilometers. Potential technologies for further development include a compact optical gimbal that has low size, weight, and power, and more efficient modem and coding techniques to extend range and/or data rate.
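To make the link-budget element concrete, here is a back-of-the-envelope free-space optical link budget of the kind the seminar covers. Every parameter value is an illustrative assumption, not a specification of the demonstrated system.

```python
import math

# Back-of-the-envelope free-space optical link budget.
# All parameter values are illustrative assumptions.
wavelength = 1.55e-6      # m (telecom-band laser)
tx_power_dbm = 30.0       # 1 W transmitter
tx_aperture = 0.02        # m, transmit aperture diameter
rx_aperture = 0.10        # m, receive aperture diameter
range_m = 80e3            # 80 km air-to-ground link
atmo_loss_db = 6.0        # assumed atmospheric/scintillation allowance

def aperture_gain_db(diameter: float) -> float:
    """Ideal antenna gain of a circular aperture at this wavelength."""
    return 10 * math.log10((math.pi * diameter / wavelength) ** 2)

free_space_loss_db = 20 * math.log10(4 * math.pi * range_m / wavelength)
rx_power_dbm = (tx_power_dbm + aperture_gain_db(tx_aperture)
                + aperture_gain_db(rx_aperture)
                - free_space_loss_db - atmo_loss_db)
print(f"Received power: {rx_power_dbm:.1f} dBm")
```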
1 BS, Electrical Engineering, University of Massachusetts Lowell
Dynamic Link Adaptation for Satellite Communications
Dr. Huan Yao1
MIT Lincoln Laboratory
Future protected military satellite communications will continue to use high transmission frequencies to capitalize on large amounts of available bandwidth. However, the data flowing through these satellites will transition from the circuit-switched traffic of today's satellite systems to Internet-like packet traffic. One of the main differences in migrating to packet-switched communications is that the traffic will become bursty (i.e., the data rate from particular users will not be constant). The variation in data rate is only one of the potential system variations. At the frequencies of interest, rain and other weather phenomena can introduce significant path attenuation for relatively short time periods. Current protected satellite communications systems are designed with sufficient link margins to provide a desired availability under such degraded path conditions. These systems do not have provisions to use the excess link margins for additional capacity when weather conditions are good. The focus of this seminar is the design of a future satellite system that autonomously reacts to changes in link conditions and offered traffic. This automatic adaptation drastically improves the overall system capacity and the service that can be provided to ground terminals.
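The core adaptation logic can be illustrated with a simple margin-based rate table: the terminal selects the highest data rate whose required signal quality fits within the currently measured link margin. The rates, thresholds, and hysteresis below are illustrative assumptions, not the seminar's system design.

```python
# Minimal sketch of margin-based link adaptation: pick the highest
# data rate whose required Es/N0 fits under the measured link
# quality. Rates and thresholds are illustrative assumptions.
RATE_TABLE = [  # (data rate in Mb/s, required Es/N0 in dB)
    (2048, 14.0),
    (512, 8.0),
    (128, 2.0),
    (32, -4.0),
]

def select_rate(measured_esn0_db: float, hysteresis_db: float = 1.0):
    """Return the highest rate supportable at the measured Es/N0."""
    for rate, required in RATE_TABLE:
        if measured_esn0_db >= required + hysteresis_db:
            return rate
    return None  # link outage: fall back to the minimum-margin design

print(select_rate(15.2))  # clear sky -> 2048 Mb/s
print(select_rate(3.5))   # rain fade -> 128 Mb/s
```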
1 PhD, Electrical Engineering, Massachusetts Institute of Technology
Laser Communications to Enable Space Exploration
Dr. Farzana I. Khatri1
MIT Lincoln Laboratory
Traditionally, communication between the Earth and space has relied on RF systems, which have been in use since the Apollo era when sensor technology was primitive and the internet did not exist. Today, commercial companies have deployed satellites with sensors that generate many terabytes of data each day, but only a fraction of this data is transmitted back to the Earth because of communications bandwidth constraints. Furthermore, as humans venture deeper into space, higher communications bandwidth will be required to support them. Free-space laser communications, or lasercom, offers a high-bandwidth and low-size, -weight, and -power solution to the space bandwidth bottleneck by using the same lasers and techniques that revolutionized fiber-optic communications in the 1990s. This talk will describe Lincoln Laboratory's current efforts in designing and deploying lasercom systems with NASA, the current efforts to develop a strong industry base in lasercom, and upcoming lasercom missions.
1 PhD, Electrical Engineering, Massachusetts Institute of Technology
QLIMM: A Scalable and Robust Network Protocol for Large Swarms of Small UAVs
Dr. Thomas B. Stahlbuhk1
MIT Lincoln Laboratory
In this seminar, we will detail a synchronized frequency hopping network for swarms of small unmanned aerial vehicles (UAVs) operating in unlicensed frequency bands over large geographical ranges. The core component of our design is Queue Length Informed Maximal Matching (QLIMM), a distributed transmission scheduling protocol that exchanges queue state information between nodes to assign subdivisions of the swarm to orthogonal hopping patterns in response to the network's throughput demands. QLIMM efficiently allocates channel resources across large networks without relying on any centralized control or preplanned traffic patterns, which is in the spirit of a swarming capability. However, given that control overhead must scale up with the swarm's size and the highly congested nature of the unlicensed bands, especially at advantaged altitudes, fragility could be a concern. Through extensive simulations, using measurements of background interference in the 2.4- and 5-gigahertz bands in both urban and rural settings, QLIMM is demonstrated to outperform traditional channel access control schemes and other scheduling protocols that do not pass queue state information.
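The queue-informed scheduling idea at the heart of QLIMM can be illustrated, in greatly simplified form, by a greedy queue-weighted maximal matching: the most-backlogged links are granted the channel first, and no node participates in two simultaneous transmissions. This is a generic sketch of the principle, not the QLIMM protocol itself.

```python
# Generic illustration of queue-length-informed maximal matching:
# schedule the most-backlogged links first, never reusing a node.
# (A sketch of the principle, not the QLIMM protocol itself.)
def greedy_queue_matching(links):
    """links: dict mapping (tx_node, rx_node) -> queued packets."""
    schedule, busy = [], set()
    for (tx, rx), backlog in sorted(links.items(),
                                    key=lambda kv: kv[1], reverse=True):
        if backlog > 0 and tx not in busy and rx not in busy:
            schedule.append((tx, rx))
            busy.update((tx, rx))
    return schedule

links = {("A", "B"): 12, ("B", "C"): 7, ("C", "D"): 9, ("E", "F"): 3}
print(greedy_queue_matching(links))  # [('A','B'), ('C','D'), ('E','F')]
```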
1 PhD, Aeronautics and Astronautics, Massachusetts Institute of Technology
Undersea Laser Communication — The Next Frontier
Dr. Hemonth Rao1
MIT Lincoln Laboratory
Undersea communication is of critical importance to many research and military applications, including oceanographic data collection, pollution monitoring, offshore exploration, and tactical surveillance. Undersea communication functions have traditionally been accomplished with acoustic and low-frequency radio communication. These technologies, however, impose speed limitations on the undersea platform, limit maximum data rates to a few bits per second (low-frequency radio) or multiple kilobits per second (acoustic), and radiate over a broad area. These factors lead to power inefficiencies, potential crosstalk, and multipath degradation. Undersea optical communication has the potential to mitigate these capability shortfalls, providing very-high-data-rate covert communication.
This seminar will provide an overview of applications that might benefit from high-rate undersea communications technology and will report on new research into finding the ultimate performance limit of undersea optical communication. Using sophisticated physics models of laser and sunlight undersea propagation, Lincoln Laboratory researchers have found that gigabit-per-second links over appreciable ranges, in excess of 100 meters, are possible in many of the world's oceans. This capability promises to transform undersea communications. Finally, the seminar will present architectures based on currently realizable technology that can achieve the predicted ultimate performance limit.
1 PhD, Electrical Engineering, University of California, San Diego
Waveform Design for Airborne Networks
Dr. Frederick J. Block1
MIT Lincoln Laboratory
Airborne networks are an integral part of modern network-centric military operations. These networks operate in a unique environment that poses many research challenges. For example, nodes are much more mobile in airborne networks than in ground-based networks and are often separated by great distances that cause long delays and large propagation losses. Additionally, the altitude of an airborne receiver potentially places it within line of sight of many transmitters located over a wide area. Some of these transmitters may be part of the same network as the receiver but may still cause multiple-access interference because of the difficulty of channel access protocol design for airborne networks. Other transmitters that are outside the network may also cause interference. The waveform design must specifically account for this environment to achieve the required level of performance. Physical, link, and network layer techniques for providing high-rate, reliable airborne networks will be discussed.
1 PhD, Electrical Engineering, Clemson University
Cyber Security and Information Sciences
Technical seminars offered in cyber security and information sciences.
Addressing the Challenges of Big Data Through Innovative Technologies
Dr. Vijay Gadepally1 and Dr. Jeremy Kepner2
MIT Lincoln Laboratory
The ability to collect and analyze large amounts of data is increasingly important within the scientific community. The growing gap between the volume, velocity, and variety of data available and users' ability to handle this deluge calls for innovative tools to address the challenges imposed by what has become known as big data. Lincoln Laboratory is taking a leading role in developing a set of tools to help solve the problems inherent in big data.
Big data's volume stresses the storage, memory, and compute capacity of a computing system and requires access to a computing cloud. Choosing the right cloud is problem specific. Currently, four multibillion-dollar ecosystems dominate the cloud computing environment: enterprise clouds, big data clouds, SQL database clouds, and supercomputing clouds. Each cloud ecosystem has its own hardware, software, communities, and business markets. The broad nature of big data challenges makes it unlikely that one cloud ecosystem can satisfy all needs, and solutions are likely to require the tools and techniques from more than one cloud ecosystem. The MIT SuperCloud was developed to provide one such solution. To our knowledge, the MIT SuperCloud is the only deployed cloud system that allows all four ecosystems to coexist without sacrificing performance or functionality.
The velocity of big data stresses the rate at which data can be absorbed and meaningful answers produced. Through an initiative led by the National Security Agency (NSA), the Common Big Data Architecture (CBDA) was developed for the U.S. government. The CBDA is based on the Google Bigtable NoSQL approach and is now in wide use. Lincoln Laboratory was instrumental in the development of the CBDA and is a leader in adapting the CBDA to a variety of big data challenges. The centerpieces of the CBDA are the NSA-developed Apache Accumulo database (capable of millions of entries per second) and the Lincoln Laboratory–developed Dynamic Distributed Dimensional Data Model (D4M) schema.
Finally, big data variety may present both the largest challenge and the greatest set of opportunities for supercomputing. The promise of big data is the ability to correlate heterogeneous data to generate new insights. The combination of Apache Accumulo and D4M technologies allows vast quantities of highly diverse data (bioinformatics, cyber-relevant data, social media data, etc.) to be automatically ingested into a common schema that enables rapid query and correlation of elements.
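The common-schema idea can be sketched as "exploding" each record into sparse (row, column, value) triples so that heterogeneous sources share a single table. The records below are hypothetical, and the sketch only gestures at the D4M schema rather than reproducing it.

```python
# Sketch of a D4M-style exploded schema: every record becomes sparse
# (row, column, value) triples, so heterogeneous sources share one
# table and any field|value pair is directly queryable.
# Example records are hypothetical.
def explode(record_id, record):
    """Turn a flat record into (row, 'field|value', 1) triples."""
    return [(record_id, f"{field}|{value}", 1)
            for field, value in record.items()]

triples = []
triples += explode("log-001", {"src_ip": "10.0.0.7", "user": "alice"})
triples += explode("tweet-42", {"user": "alice", "lang": "en"})

# Query: everything mentioning user 'alice', regardless of source
hits = [row for row, col, _ in triples if col == "user|alice"]
print(hits)  # ['log-001', 'tweet-42']
```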
1 PhD, Electrical and Computer Engineering, The Ohio State University
2 PhD, Physics, Princeton University
CASCADE: Cyber Adversarial Scenario Modeling and Automated Decision Engine
Trang Nguyen1
MIT Lincoln Laboratory
Cyber systems are complex and include numerous heterogeneous and interdependent entities and actors—such as mission operators, network users, attackers, and defenders—that interact within a network environment. Decisions must be made to secure cyber systems, mitigate threats, and minimize mission impact. Administrators must consider how to optimally partition a network using firewalls, filters, and other mechanisms to restrict the movement of an attacker who gains a foothold on the network and to mitigate the damage that the attacker can cause. Current cyber decision support focuses on gathering network data, processing these data into knowledge, and visualizing the resultant knowledge for cyber situational awareness. However, as the data become more readily obtained, network administrators face the same problem that business analysts have faced for some time: more data or knowledge does not by itself lead to better decisions. There is a need for the development of cyber decision support that can automatically recommend timely and sound data-driven courses of action. Currently, decisions about partitioning a network are made by analysts who use subjective judgment that is sometimes informed by best practices. We propose a decision system that automatically recommends an optimal network-partitioning architecture to maximize security posture and minimize mission impact.
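A toy version of the underlying decision problem: score candidate partitionings by how many hosts an attacker could reach from a compromised foothold. The topology and partitions below are invented for illustration.

```python
from collections import deque

# Toy evaluation of candidate network partitions: score each by how
# many hosts an attacker could reach from a compromised foothold.
# Topology and partitions are illustrative.
def reachable(adjacency, start):
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for nbr in adjacency.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return seen

flat = {"web": ["db", "hr"], "db": ["hr"], "hr": []}
partitioned = {"web": ["db"], "db": [], "hr": []}  # firewall blocks paths to hr

for name, topo in [("flat", flat), ("partitioned", partitioned)]:
    print(name, "attacker reach from 'web':", sorted(reachable(topo, "web")))
```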
1 MS, Computer Science, Columbus State University
Cryptographically Secure Computation
Dr. Emily Shen1
MIT Lincoln Laboratory
For many national security applications, such as cyber defense, humanitarian target deconfliction, and satellite conjunction analysis, multiple parties want to collaborate to gain insights from their collective data but do not want to share their own data because of security and privacy concerns. Current approaches to collaboration require that the parties trust each other with their sensitive data or use a mutually trusted third party; these approaches are often undesirable and impractical or impossible.
Secure multiparty computation (MPC) is a type of cryptography that allows parties to jointly compute on their data without giving their data to each other and without using a trusted party. While MPC techniques exist that can in theory compute any function securely, in practice it is challenging to develop and optimize MPC for complex applications. To address this challenge, Lincoln Laboratory has developed a framework for rapidly prototyping and evaluating MPC solutions. This talk will describe MPC technology, the framework developed by the Laboratory, and prototypes built using the framework for a variety of applications.
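The flavor of MPC can be conveyed with its simplest building block, an additive secret-sharing sum: each party splits its private input into random shares that individually reveal nothing, yet together reconstruct the total. This is a textbook toy protocol, not the Laboratory's framework.

```python
import random

# Toy additive secret sharing over a prime field: each party splits
# its private input into random shares; no single share reveals
# anything, but the shares sum to the true total.
P = 2**61 - 1  # a Mersenne prime field modulus

def share(secret, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

inputs = [12, 30, 7]  # each party's private value
n = len(inputs)
all_shares = [share(x, n) for x in inputs]

# Party i locally sums the i-th share of every input...
partial_sums = [sum(col) % P for col in zip(*all_shares)]
# ...and only the combination of partial sums reveals the total.
print(sum(partial_sums) % P)  # 49
```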
1 PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology
Multicore Programming in pMatlab® Using Distributed Arrays
Dr. Jeremy Kepner1
MIT Lincoln Laboratory
MATLAB is one of the most commonly used languages for scientific computing, with approximately one million users worldwide. Many of the programs written in MATLAB can benefit from the increased performance offered by multicore processors and parallel computing clusters. The Lincoln Laboratory pMatlab library allows high-performance parallel programs to be written quickly by using the distributed arrays programming paradigm. This talk will provide an introduction to distributed arrays programming and describe the best programming practices for using distributed arrays to produce programs that perform well on multicore processors and parallel computing clusters. These practices include understanding the concepts of parallel concurrency versus parallel data locality, using Amdahl's Law, and employing a well-defined design-code-debug-test process for parallel codes.
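Amdahl's Law, mentioned above, bounds a program's speedup by its serial fraction; a quick illustration:

```python
# Amdahl's Law: speedup on n processors when a fraction p of the
# work parallelizes perfectly.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel code, 64 cores yield only ~15x:
for n in (4, 16, 64):
    print(n, f"{amdahl_speedup(0.95, n):.1f}x")
```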
1 PhD, Physics, Princeton University
Operationalizing Artificial Intelligence in Cyber Security
Dr. Dennis Ross1
MIT Lincoln Laboratory
Cyber security professionals work in a domain with a constant coevolution of adversary tactics and defensive measures. Both of these activities are represented in large datasets collected from multiple sensors. Cyber operators are overwhelmed by the volume and velocity of these data streams and often lack the tools to properly apply their analyses in a mission context. Both offensive and defensive capabilities require building situational awareness and analytical tools on large, unstructured, and multisource data collections. Fortunately, artificial intelligence (AI) techniques can provide significant capabilities to rapidly analyze large datasets and automate several common cyber activities, such as event generation, network characterization, detection of new adversarial behavior, data curation, and recommendations for cyber actions. These technologies are then paired with frameworks to deliver AI to the operational community through novel UI/UX development and principles of explainable AI. This talk will focus on the cyber operational activities that are most amenable to AI with examples drawn from recent technology development and a discussion of when not to apply AI to cyber operations.
1 PhD, Computer Science and Engineering, Michigan State University
Persona Linking Tools for Large-Scale Cyber Entity Attribution
Dr. Lin Li1 and Dr. Charlie Dagli2
MIT Lincoln Laboratory
The challenge of associating digital personas across multiple information domains is a key problem in social media understanding and cyber entity attribution. Successful persona linking provides integration of information from multiple information sources to create a complete picture of user and community activities, characteristics, and trends. Lincoln Laboratory has developed a suite of high-performance digital persona linking tools using advanced natural language processing and graph analytics. These tools aim to uncover hidden identities and covert network activities across multiple information domains, e.g., social media, online forums, deep and dark web marketplaces, and Department of Defense and law enforcement–specific data. The tools are designed to operate on large-scale data collections and exploit multiple aspects of persona signatures. Using multiple dimensions of persona matching, the suite of tools has demonstrated state-of-the-art performance on both social media and illicit marketplace persona-linking tasks. In this seminar, we will highlight our recent advancements in persona linking, including methodology, applications, performance, and future trends and needs.
1 PhD, Electrical and Computer Engineering, University of California, Davis
2 PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Real-Time Security
Dr. Bryan Ward1 and Dr. Nathan H. Burow2
MIT Lincoln Laboratory
Real-time systems (RTS) underlie much of our modern infrastructure, from power grids to cars to the industrial base that produces many consumer goods. Attacks against such systems, as exemplified by the 2015 attacks by Russia against the Ukrainian power grid and the 2017 Triton attack against safety systems in an industrial plant, are becoming increasingly common. Defending RTS is significantly different from securing a traditional enterprise computing system because RTS inherently have timing constraints. Consequently, the performance of the hardened system is not a question of user experience or server efficiency but is critical to the correct, and safe, functioning of the system.
At Lincoln Laboratory, we have been pursuing a suite of technologies to help harden RTS, including a new evaluation of existing enterprise software security techniques in the RTS context. Based on that evaluation, we conclude that randomization-based techniques are a promising class of defenses for RTS, but the variability in their run times has so far prevented their use. We are developing a novel randomization technique, called DART, to address this run-time nondeterminism in randomization-based defenses.
1 PhD, Computer Science, University of North Carolina at Chapel Hill
2 PhD, Computer Science, Purdue University
Resilient Mission Computer
Dr. Hamed Okhravi1 and Dr. Nathan H. Burow2
MIT Lincoln Laboratory
Modern mission-critical systems leverage commodity components such as processors, operating systems, and programming languages as a cost-effective way to quickly create mission infrastructure. However, these components are inherently insecure because they were designed in an era when the need for security was not widely understood. Current bolt-on defenses can only provide partial protection at best. According to recent studies by Microsoft and Google, bugs arising from unsafe programming languages alone (i.e., memory corruption vulnerabilities) account for 70% of vulnerabilities today.
The Resilient Mission Computer (RMC) effort focuses on researching and developing a fundamentally new computer system design in which important security properties are inherent. Building on recent advancements in safe programming languages (e.g., Rust) and security-aware processor designs (e.g., tagged architectures), RMC designs and builds a memory-safe software stack, comprising the application, library, and operating system (OS) layers. To further mitigate the impact of vulnerabilities that are not prevented by memory safety, we leverage the security-aware processor to enforce fine-grained compartmentalization and privilege reduction in the OS.
1 PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
2 PhD, Computer Science, Purdue University
Secure and Resilient Cloud Computing for the Department of Defense
Dr. Charles E. Munson IV1
MIT Lincoln Laboratory
The Department of Defense and the Intelligence Community are rapidly moving to adopt cloud computing to achieve substantial user benefits, such as the ability to store and access massive amounts of data, on-demand delivery of computing services, the capability to widely share information, and the ability to control the scalability of resource usage. However, cloud computing both exacerbates existing cyber security challenges and introduces new challenges.
Today, most cloud data remain unprotected and therefore vulnerable to theft, deception, or destruction. Furthermore, the homogeneity of cloud software and hardware increases the risk of a single cyberattack on the cloud causing a widespread impact. To address these challenges, Lincoln Laboratory is developing technology that will strengthen the security and resilience of cloud computing so that the Department of Defense and Intelligence Community can confidently deploy cloud services for their critical missions.
This seminar will detail the cyber security threats to cloud computing, the Laboratory's approach to cloud security and resilience, and results from one of our novel, award-winning technologies that demonstrate enhanced cyber capabilities with low overhead.
1 PhD, Computer Engineering, Georgia Institute of Technology
Web Security Origins and Evolution
Mary Ellen Zurko1
MIT Lincoln Laboratory
Computer systems and software should have security designed in from the start, giving them a firm foundation for the necessary security goals and principles based on confidentiality, integrity, and availability. After that, our systems and software evolve, functionality is added, and new technologies are integrated. Attacks occur, and evolve right along with our systems. Our security evolves with our systems, and with the attacks. Unfortunately, there are few published papers or case studies on the successful application of this security life cycle model.
In this seminar, a successful distributed system, the World Wide Web, serves as a case study in security evolution. Initial security features will be reviewed, with lessons learned for those that were adopted and those that were not. The transition to attack response brought a different set of features, and some early security choices turned out to not meet the security goals set out for them. New features and technologies brought basic shifts in security assumptions made in early design decisions. More recent attacks will be discussed, especially those enabled by current use and development of the web, which were largely unimagined during the initial design of the technology and its protections. This perspective provides lessons for security considerations for systems of all sizes and at all stages of maturity.
1 MS, Computer Science, Massachusetts Institute of Technology
Engineering
Technical seminars offered in engineering.
Optomechanical Systems Engineering for Space Payloads
Dr. Keith B. Doyle1
MIT Lincoln Laboratory
Lincoln Laboratory has developed a broad array of optical payloads for imaging and communication systems in support of both science and national defense applications. Space provides a particularly harsh environment for high-performing optical instruments that must survive launch and then meet demanding imaging and pointing requirements in dynamic thermal and vibration operational settings. This seminar presents some of the mechanical challenges in building optical payloads for space and draws on a history of successful optical sensors built at Lincoln Laboratory along with recent systems in the news for both imaging and communication applications. Details include an overview of recent optical payloads and their applications, optomechanical design considerations, the impact and mitigation of environmental effects, and environmental testing.
1 PhD, Engineering Mechanics, University of Arizona
Homeland Protection
Technical seminars offered in homeland protection.
This Looks Like That: Interpretable Neural Networks for Image Recognition
Dr. Jonathan K. Su1
MIT Lincoln Laboratory
Deep learning has contributed significantly to the recent success and explosion of applications in artificial intelligence and machine learning (AI/ML). Among deep learning methods, convolutional neural networks (CNNs) have enjoyed great success at computer-vision tasks. The layers of a CNN learn good features with increasing levels of semantic meaning, and the final classification layer of a CNN makes a prediction from the highest-level features. CNNs have achieved state-of-the-art performance for image recognition and object detection. Despite their excellent predictive performance, CNNs represent the ultimate black box, whose inner workings are notoriously difficult to understand. This characteristic can make users and experts reluctant to trust and accept such AI/ML techniques. A variety of post hoc ("after the event") techniques, such as saliency maps, have been proposed as ways to explain what a CNN is doing. These techniques do not modify the CNN architecture or training, but they try to show the influence of different input pixels on the output of the CNN. However, other research has shown that such methods can be misleading or even misled.
To address the above shortcomings, and in collaboration with Duke University, we developed an interpretable classification layer for image-recognition CNNs. Unlike a conventional CNN or post hoc methods, our network is deliberately designed and trained to be understandable by humans. We call it a prototypical part network because its prediction process ("this looks like that") resembles how a human might solve an image-recognition task: find parts of the test image that are similar to example parts from classes that one has seen before, and use the part similarities to choose the class of the test image. Given a test image, the network looks for prototypical parts that were learned during training, and it combines evidence from the prototypical parts to make its prediction. Moreover, network training only uses image-level class labels: it learns the prototypical parts automatically, without requiring part-level annotations.
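In simplified form, the prediction step can be sketched as follows: compare patch embeddings of a test image against learned prototype vectors, convert each best-match distance into a similarity score, and sum the evidence per class. The shapes and random values below are placeholders; in the actual network, the embeddings and prototypes are learned end to end.

```python
import numpy as np

# Simplified sketch of prototype-based scoring: compare every patch
# embedding against prototypes, turn the best-match distance into a
# similarity ("this looks like that"), and sum evidence per class.
# Shapes and values are illustrative placeholders.
rng = np.random.default_rng(0)
patches = rng.normal(size=(49, 128))        # 7x7 grid of patch embeddings
prototypes = rng.normal(size=(10, 128))     # learned prototype vectors
proto_class = np.repeat(np.arange(5), 2)    # 2 prototypes per class

def similarity_scores(patches, prototypes):
    # distance from each prototype to its closest image patch
    d = np.linalg.norm(patches[:, None, :] - prototypes[None], axis=-1)
    d_min = d.min(axis=0)
    return np.log((d_min + 1.0) / (d_min + 1e-4))  # high when d_min is small

scores = similarity_scores(patches, prototypes)
class_evidence = np.zeros(5)
np.add.at(class_evidence, proto_class, scores)
print("predicted class:", class_evidence.argmax())
```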
We demonstrated the model on a public dataset containing 200 species of birds, and we showed that it can achieve comparable accuracy to conventional CNNs while also providing an interpretable prediction process.
1 PhD, Electrical Engineering, Georgia Institute of Technology
Human Language Technology
Technical seminars offered in human language technology.
Signal Processing for the Measurement of Characteristic Voice Quality
Dr. Nicolas Malyska1 and Dr. Thomas F. Quatieri2
MIT Lincoln Laboratory
The quality of a speaker's voice communicates to a listener information about many characteristics, including the speaker's identity, language, dialect, emotional state, and physical condition. These characteristic elements of a voice arise due to variations in the anatomical configuration of a speaker's lungs, voice box, throat, tongue, mouth, and nasal airways, and the ways in which the speaker moves these structures. The voice box, or larynx, is of particular interest in voice quality, as it is responsible for generating variations in the excitation source signal for speech.
In this seminar, we will discuss mechanisms by which voice-source variations are generated, appear in the acoustic signal, and are perceived by humans. Our focus will be on using signal processing to capture acoustic phenomena resulting from the voice source. The presentation will explore several applications that build upon these measurement techniques, including turbulence-noise component estimation during aperiodic phonation, automatic labeling for regions of irregular phonation, and the analysis of pitch dynamics.
1 PhD, Health Sciences and Technology, Massachusetts Institute of Technology
2 ScD, Electrical Engineering, Massachusetts Institute of Technology
Vocal Biomarkers of Neurological Conditions Based on Motor Timing and Coordination
Dr. Thomas F. Quatieri1
MIT Lincoln Laboratory
Toward the goal of noninvasive, objective means to detect and monitor psychological, neurodegenerative, and neurotraumatic conditions, Lincoln Laboratory is developing vocal biomarkers that reflect a change in brain functioning as manifested in motor control. Specifically, vocal features are based on timing and coordination of articulatory components of vocal expression, motivated by the hypothesis that these relations are associated with neural coordination across different parts of the brain essential in speech motor control. Timing- and coordination-based features are extracted using behavioral measures from the acoustic signal and from associated facial measures during speaking, but also from neurocomputational models of speech production. These models are governed by motor control parameters that are constrained by neurophysiology and may complement acoustic-derived features. This presentation will lay the foundation for extracting our vocal features and will illustrate their use with an example from each of three application areas: major depressive disorder, Parkinson’s disease, and mild traumatic brain injury. The measurement and modeling framework may provide a common neurophysiological feature basis for detecting and monitoring neurological disease from speech, while potentially providing features to distinguish across disorders and to monitor and predict the effect of treatments.
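One common way to compute coordination features of this kind is from the eigenvalue spectrum of a correlation matrix built over time-delayed copies of multivariate vocal feature tracks. The sketch below uses random stand-in tracks and illustrative delays; the specific features and delays used in the Laboratory's work may differ.

```python
import numpy as np

# Simplified sketch of coordination features: stack time-delayed
# copies of multivariate feature tracks (e.g., formants), form their
# correlation matrix, and use its eigenvalue spectrum as a feature.
# The signal, delays, and dimensions here are illustrative.
rng = np.random.default_rng(1)
tracks = rng.normal(size=(3, 500))   # e.g., three formant tracks
delays = [0, 3, 7, 15]               # frame delays (illustrative)

def coordination_features(tracks, delays):
    n_ch, n_frames = tracks.shape
    span = max(delays)
    stacked = np.vstack([tracks[:, d:n_frames - span + d] for d in delays])
    corr = np.corrcoef(stacked)
    eigvals = np.linalg.eigvalsh(corr)[::-1]     # sorted descending
    return eigvals / eigvals.sum()               # normalized eigenspectrum

print(coordination_features(tracks, delays)[:4])
```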
1 ScD, Electrical Engineering, Massachusetts Institute of Technology
Machine Learning and Artificial Intelligence
Technical seminars offered in machine learning and artificial intelligence.
Active Learning Pipeline for Brain Mapping in a High-Performance Computing Environment
Dr. Laura J. Brattain1 and Dr. Lars A. Gjesteby2
MIT Lincoln Laboratory
One of the top priorities of the BRAIN Initiative led by the U.S. government is to map human brains at multiple scales. Detailed maps of connected neurons in both local circuits and distributed brain systems, once reconstructed, will facilitate understanding of the relationship between neuronal structure and function. Advances in brain imaging techniques have made it possible to image brain structures at high throughput (on the order of terabytes per hour), over a large field of view (multiple brain regions), and at high resolution (cellular or subcellular). Datasets for a whole human brain are estimated to reach several petabytes, which is effectively impossible to process manually. Image processing and visualization techniques are being developed to assist neuroscientific discovery. While there are many methods to analyze high-resolution neuroimaging data, accurate neuron segmentation and tracing at scale are among the fundamental processing tasks that still need to be optimized.
This seminar will describe a scalable active learning pipeline prototype for large-scale brain mapping that leverages high-performance computing power. This pipeline enables high-throughput evaluation of algorithm results, which, after human review, are used for iterative machine learning model training. Benchmark testing of image processing using parallel MATLAB shows that a hundredfold increase in throughput can be achieved while total processing time increases by only 9%, indicating robust scalability. This pipeline has the potential to greatly reduce the manual annotation burden and improve the overall performance of machine learning–based brain mapping.
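The active learning loop itself is straightforward to sketch: train on the labeled pool, score the unlabeled pool, route the least-confident samples to a human reviewer, and retrain. The data below are synthetic, with ground-truth labels standing in for the human annotator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal active-learning loop: train, score unlabeled samples, send
# the least-confident ones for review, retrain. Synthetic data; in
# the pipeline, the "oracle" labels come from a human annotator.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)  # oracle labels

labeled = list(range(20))          # small seed set
unlabeled = list(range(20, 1000))

for _ in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    confidence = np.abs(proba[:, 1] - 0.5)          # distance from 0.5
    query = np.argsort(confidence)[:50]             # least confident
    for i in sorted(query, reverse=True):           # "human" labels them
        labeled.append(unlabeled.pop(i))

print(f"labeled set grew to {len(labeled)} samples")
```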
1 PhD, Biological Engineering, Harvard University
2 PhD, Biomedical Engineering, Rensselaer Polytechnic Institute
AI and Ultrasound-Guided Medical Interventions for the Battlefield
Dr. Lars A. Gjesteby1 and Dr. Laura J. Brattain2
MIT Lincoln Laboratory
Future conflicts will require prolonged field care of up to three days before evacuation. To address this new challenge, service members will need assistive devices to efficiently provide complex care with minimal training. We are prototyping a common platform to provide life-saving interventions for the top three causes of preventable battlefield deaths: hemorrhage, loss of airway, and tension pneumothorax. The system integrates portable ultrasound, artificial intelligence implemented in real-time software, and miniature robotics in a low-size, -weight, and -power package. The initial focus is to prototype a semiautomated central vascular access system for the femoral vein and/or artery, as specified by user input. The system guides a user to position the device over the target blood vessel, inserts a needle, and confirms needle placement. Accurate real-time vessel tracking and needle path guidance have been achieved in porcine models. In addition, a prospective porcine study was performed in which the user was blinded to the ultrasound image and relied solely on directional outputs from the algorithm as guidance on probe movement. The goal of the system was to center the target vessel in the image along the line of needle insertion. Once the predicted location was reached, the image was turned back on, and it was confirmed that the vessel was in an ideal location for cannulation. A safety checking algorithm was also tested and verified to alert the user to avoid needle insertion when vessels were overlapping in a vertical orientation. A video demonstration of the system will be provided.
1 PhD, Biomedical Engineering, Rensselaer Polytechnic Institute
2 PhD, Biological Engineering, Harvard University
Automated Machine Learning and Enabling Data Infrastructure
Dr. Swaroop Vattam1 and Dr. Pooya Khorrami2
MIT Lincoln Laboratory
Machine learning is becoming increasingly important with the abundance of data, while the number of skilled machine learning experts is lagging. Automated machine learning (AutoML) systems hold the promise of allowing subject-matter experts to derive insights and value from their data without necessarily having to rely on machine learning experts or develop deep expertise in machine learning themselves. AutoML research investigates whether it is possible to automate the process of developing end-to-end machine learning pipelines, beginning with raw data and ending with well-calibrated predictive models. This seminar will present an overview of the challenges of AutoML research and the data infrastructure required to support the development of robust AutoML methods. We will also discuss state-of-the-art AutoML systems, their limitations, and our research directions intended to push the boundaries of AutoML research.
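For illustration, a minimal AutoML-style search enumerates candidate preprocessing-and-model pipelines and keeps the one with the best cross-validated score. Real AutoML systems search far larger spaces with smarter strategies; this sketch only shows the shape of the problem.

```python
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Toy AutoML search: enumerate preprocessing/model combinations and
# keep the pipeline with the best cross-validated score.
X, y = load_breast_cancer(return_X_y=True)
scalers = [StandardScaler(), MinMaxScaler()]
models = [LogisticRegression(max_iter=2000), RandomForestClassifier()]

best = max(
    (make_pipeline(s, m) for s, m in product(scalers, models)),
    key=lambda pipe: cross_val_score(pipe, X, y, cv=5).mean(),
)
print("selected pipeline:", [step[0] for step in best.steps])
```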
1 PhD, Computer Science, Georgia Institute of Technology
2 PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
Combating Illicit Marketplaces on the Deep and Dark Web Using Machine Learning
Dr. Charlie Dagli1, Dr. Lin Li2, and Dr. Joseph Campbell3
MIT Lincoln Laboratory
More and more illicit economic activity is being mediated online. Illicit marketplaces on the deep web and dark web provide anonymous platforms for the sale and purchase of illegal goods and services. Unfortunately, traditional technologies for indexing and search are not sufficient for combating illicit activities observed in these marketplaces. In this talk, we will describe machine learning–based approaches for enabling law enforcement to counter activities observed in these marketplaces. In particular, we will discuss technologies for multiplatform persona linking (e.g., determining whether a user in one illicit marketplace is likely to be the same person as a different user in another illicit marketplace). We will also discuss uncued discovery of organizations operating in illicit marketplaces. Additionally, we will discuss how these technologies were used in the Defense Advanced Research Projects Agency’s Memex program and how Memex is providing significant benefits to law enforcement agencies combating illicit activity online.
1 PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
2 PhD, Electrical and Computer Engineering, University of California, Davis
3 PhD, Electrical Engineering, Oklahoma State University
Fast AI: Enabling Rapid Prototyping of AI Solutions
Dr. Vijay Gadepally1, Dr. Jeremy Kepner2, Dr. Albert Reuther3, and Dr. Siddharth Samsi4
MIT Lincoln Laboratory
Recent advances in artificial intelligence (AI) and machine learning (ML) have largely relied on access to the massive quantities of data and processing available in high-performance computing centers such as the Lincoln Laboratory Supercomputing Center (LLSC). Coupled with advanced algorithms, AI and ML technologies are making a significant impact on various government missions. As a world leader in developing high-performance computing (HPC) tools that are easy to use without compromising performance, the LLSC has been developing a number of novel technologies to enable the rapid prototyping of AI solutions for a variety of Lincoln Laboratory and MIT campus missions. This research in “fast AI” rests on three pillars: modern computing, data management, and interfaces and algorithms. This seminar will discuss the AI landscape from the viewpoint of the LLSC along with an overview of various research thrusts across the LLSC that enable rapid prototyping of AI solutions.
1 PhD, Electrical and Computer Engineering, The Ohio State University
2 PhD, Physics, Princeton University
3 PhD, Electrical and Computer Engineering, Purdue University
4 PhD, Electrical Engineering, The Ohio State University
Machine Learning Applications in Aviation Weather and Traffic Management
Dr. Mark S. Veillette1
MIT Lincoln Laboratory
Adverse weather accounts for the majority of air traffic delays in the United States. When weather is expected to impact operations, air traffic managers (ATMs) are often faced with an overload of weather information required to make decisions. These data include multiple numerical weather model forecasts, satellite observations, Doppler radar, lightning detections, wind information, and other forms of meteorological data. Many of these data contain a great deal of uncertainty. Absorbing and utilizing these data in an optimal way is challenging even for experienced ATMs.
This talk will provide two examples of using machine learning to assist ATMs. In our first example, data from weather satellites are combined with global lightning detections and numerical weather models to create Doppler radar-like displays of precipitation in regions outside the range of weather radar. In the second example, multiple weather forecasts are used to refine the prediction of airspace capacity within selected regions of airspace. Examples and challenges of data collection, translation, modeling, and operational prototype evaluations will be discussed.
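As a hedged illustration of the first example, a convolutional network can be trained to regress a radar-like reflectivity field from stacked satellite, lightning, and model inputs. The sketch below is a minimal stand-in, not the operational model; the channel count, layer sizes, and random tensors are placeholders for real training pairs.

```python
# Minimal sketch (not the operational model): a CNN that maps stacked
# satellite, lightning, and NWP channels to a radar-like reflectivity field.
import torch
import torch.nn as nn

class RadarFromSatellite(nn.Module):
    def __init__(self, in_channels=8):  # channel count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one output plane: synthetic reflectivity
        )

    def forward(self, x):
        return self.net(x)

model = RadarFromSatellite()
batch = torch.randn(4, 8, 64, 64)        # fake satellite/lightning/NWP stack
synthetic_radar = model(batch)           # (4, 1, 64, 64)
target = torch.randn(4, 1, 64, 64)       # stand-in for true radar mosaics
loss = nn.functional.mse_loss(synthetic_radar, target)
loss.backward()                          # training would iterate on real pairs
```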
1 PhD, Mathematics, Boston University
Machine Learning Approaches for Source Attribution of Forensic-Relevant Materials
Dr. Joshua R. Dettman1 and Amanda M. Casale2
MIT Lincoln Laboratory
Source attribution of materials involved in a crime using chemical and physical signatures can be an important technique for generating investigative leads on the origination location (source) of the material and its potential association with a suspect. Prior proof-of-concept studies of this type have been performed for a variety of materials of forensic and commercial importance. Chemometric or, more generally, machine learning (ML) techniques can be used to extract conclusions from the relatively high-dimensional data collected and to estimate the probability that a sample originated from a specific source. Lincoln Laboratory is developing composite ML sourcing algorithms that use multiple tiers of supervised ML classification methods, data conditioning, bootstrap supplementation of reference data, and probability-weighted results fusion to estimate the probability of source membership of fertilizer samples based on chemical and physical signature data. In the functioning sourcing system, the individual signature probabilities, the fused relative probabilities, and the raw signature data are presented to the examiner in a user-focused web interface for further analysis and a final source assignment decision.
For a pilot effort, reference samples of fertilizer were obtained from known commercial and industrial sources. After evaluation of many potential signature types, the final set of signatures collected from the reference samples comprised reflectance spectra (color), particle morphology (size/shape), and trace element composition. For the samples from 23 fertilizer sources that were tested, the correct source is chosen in cross-validation as the most likely source 87% of the time and is, on average, 2.3 times as likely as the most probable incorrect source. These relatively accurate and confident sourcing results, even under conservative performance estimate conditions, indicate the promise of the signatures and the composite ML sourcing algorithm for determining the source of unknown fertilizer samples.
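The probability-weighted fusion step can be pictured with a small sketch. The following is a simplified stand-in for the composite algorithm (the tiered classification, data conditioning, and bootstrap supplementation described above are omitted), and the per-signature weights and synthetic data are invented.

```python
# Sketch of probability-weighted fusion across signature types
# (color, morphology, trace elements). Data and weights are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sources = 23
y = np.repeat(np.arange(n_sources), 10)        # 10 samples per source
n = len(y)
signatures = {                                 # one feature block per signature
    "color": rng.normal(y[:, None], 1.0, (n, 5)),
    "morphology": rng.normal(y[:, None], 2.0, (n, 4)),
    "trace_elements": rng.normal(y[:, None], 0.5, (n, 10)),
}
weights = {"color": 0.2, "morphology": 0.3, "trace_elements": 0.5}  # assumed

fused = np.zeros((n, n_sources))
for name, X in signatures.items():
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    fused += weights[name] * clf.predict_proba(X)   # weighted class posteriors

print("most likely source for sample 0:", fused[0].argmax())
```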
1 PhD, Chemistry, The Ohio State University
2 MS, Statistics, Harvard University
Speech Enhancement Using Deep Neural Networks
Dr. Jonas Borgstron1 and Dr. Michael Brandstein2
MIT Lincoln Laboratory
Speech signals captured by law enforcement or the Intelligence Community are typically degraded by additive noise and reverberation. Speech enhancement aims to suppress such distortion, leading to improved intelligibility and reduced fatigue in analysts. This talk will review single-channel speech enhancement and discuss recent advances made by Lincoln Laboratory that leverage deep neural networks.
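A common single-channel formulation, shown in the hedged sketch below, has a network predict a time-frequency gain mask that is applied to the noisy short-time spectrum before resynthesis; the tiny network, shapes, and random audio are stand-ins, not the Laboratory's architecture.

```python
# Minimal sketch of mask-based single-channel enhancement: a network predicts
# a time-frequency gain mask that suppresses noise. Layer sizes, shapes, and
# the use of a magnitude-driven mask are assumptions for illustration.
import torch
import torch.nn as nn

n_fft, hop = 512, 128
noisy = torch.randn(1, 16000)                      # 1 s of fake audio at 16 kHz
window = torch.hann_window(n_fft)
spec = torch.stft(noisy, n_fft, hop, window=window, return_complex=True)
mag = spec.abs()                                    # (1, freq, frames)

mask_net = nn.Sequential(                           # tiny stand-in DNN
    nn.Linear(n_fft // 2 + 1, 256), nn.ReLU(),
    nn.Linear(256, n_fft // 2 + 1), nn.Sigmoid(),   # gain in [0, 1]
)
mask = mask_net(mag.transpose(1, 2)).transpose(1, 2)
enhanced_spec = mask * spec                         # apply mask to complex STFT
enhanced = torch.istft(enhanced_spec, n_fft, hop, window=window)
```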
1 PhD, Electrical Engineering, University of California, Los Angeles
2 PhD, Electrical Engineering, Brown University
Synthetic Data Augmentation for AI
Dr. Pooya Khorrami1 and Dr. Michael Brandstein2
MIT Lincoln Laboratory
One common challenge when training machine learning systems, particularly ones that use deep neural networks, is acquiring a large amount of curated labeled data. In many cases, obtaining large-scale datasets with high-fidelity labels can be a time-consuming or even impossible task. As a result, many researchers have tried to address this issue by proposing techniques to efficiently generate large amounts of synthetic training data. In this talk, we examine two different paradigms for generating synthetic data (simple image transformations and generative adversarial networks) and assess the benefits they provide in two application domains: American Sign Language recognition and face recognition. Our findings show that even simple augmentation techniques can improve recognition accuracy when the amount of available data is low, but having more data allows for more complicated approaches to be successful.
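The "simple image transformations" paradigm is easy to picture, as in the sketch below; the specific transforms and parameter values are illustrative rather than those used in the study.

```python
# Sketch of simple-transformation data augmentation with torchvision.
# The transform choices and parameters are illustrative assumptions.
import torch
from torchvision import transforms

augment = transforms.Compose([
    # note: horizontal flips may be inappropriate for some domains,
    # e.g., signs whose meaning depends on handedness
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = torch.rand(3, 256, 256)                       # fake RGB image tensor
augmented_views = [augment(image) for _ in range(8)]  # 8 synthetic variants
```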
1 PhD, Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
2 PhD, Electrical Engineering, Brown University
The Case for a New AI + Cyber + Bio Security Paradigm
Dr. William W. Streilein1
MIT Lincoln Laboratory
Recent advances in biotechnology, computational ability, and artificial intelligence (AI) have led to an explosion in medical, scientific, and manufacturing capabilities that together support a thriving national bioeconomy. However, the merger of the previously independent domains of biology, cyber, and AI also brings about new threat surfaces for a sufficiently motivated attacker. Beyond the typical cyber-physical or AI-physical overlap, the convergence of cyber and AI within the bio domain gives rise to a unique and increased concern: only in the biological domain can an adversary wreak havoc against four capabilities that are central to the bioeconomy. First, there is the ability to impact the capacity “to make,” or manufacture, bio-related materials, such as through workflow disruption or equipment destruction, resulting in, for example, failed countermeasure production. Second, the ability “to detect” natural or manmade toxins and pathogens can be attacked via detector tampering or an AI evasion attack, leaving the public vulnerable. Third, adversaries can cause biotechnical systems “to reveal” private, proprietary, or sensitive information at a speed and scale not previously encountered, enabling use for nefarious purposes or to gain economic advantage. Finally, and unique to bio, there is an opportunity to disrupt the ability “to learn” from biology by attacking the scientific method itself. Here, adversarial AI attacks that poison training data with mislabeled samples can result in the wrong model of a biological process being learned. This talk motivates the need to develop a new approach to biosecurity that considers the interdependent nature of the convergence of cyber and AI attack surfaces within the bio domain, rather than current stovepiped approaches, such as malware detection within the cyber domain.
1 PhD, Cognitive and Neural Systems, Boston University
Radar and Signal Processing
Technical seminars offered in radar and signal processing.
Adaptive Array Estimation
Keith Forsythe1
MIT Lincoln Laboratory
Parameter estimation is a necessary step in most surveillance systems and typically follows detection processing. Estimation theory provides parameter bounds specifying the best achievable performance and suggests maximum-likelihood (ML) estimation as a viable strategy for algorithm development. Adaptive sensor arrays introduce the added complexity of bounding and assessing parameter estimation performance (1) in the presence of limiting interference whose statistics must be inferred from measured data and (2) under uncertainty in the array manifold for the signal search space. This talk focuses on assessing the mean-squared-error (MSE) performance at low and high signal-to-noise ratio (SNR) of nonlinear ML estimation that (1) uses the sample covariance matrix as an estimate of the true noise covariance and (2) has imperfect knowledge of the array manifold for the signal search space. The method of interval errors is used to predict MSE performance and is shown to be remarkably accurate well below the estimation threshold. SNR loss in estimation performance due to noise covariance estimation is quantified and is shown to be quite different from analogous losses obtained for detection. Lastly, the asymptotic efficiency of ML estimation is discussed in the general context of misspecified models, the most general form of model mismatch.
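As a toy illustration of the setting (not of the talk's bound analysis), the sketch below forms a sample covariance from training snapshots and maximizes an adaptive likelihood-based statistic over angle for a single source; the array size, snapshot count, and SNR are invented.

```python
# Toy setting: a uniform line array estimates one source direction by
# maximizing an adaptive statistic that uses the sample covariance R_hat
# in place of the true (unknown) noise covariance. Sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
N, K = 8, 50                                  # elements, training snapshots
d = 0.5                                       # element spacing in wavelengths

def steer(theta):
    return np.exp(2j * np.pi * d * np.arange(N) * np.sin(theta))

R_hat = np.zeros((N, N), complex)             # sample covariance of noise
for _ in range(K):
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    R_hat += np.outer(x, x.conj())
R_hat /= K

noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
snapshot = 3 * steer(0.2) + noise             # source at 0.2 rad

Ri = np.linalg.inv(R_hat)
grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
# Adaptive statistic |a^H R^-1 x|^2 / (a^H R^-1 a), maximized over angle
stat = [abs(steer(t).conj() @ Ri @ snapshot) ** 2 /
        (steer(t).conj() @ Ri @ steer(t)).real for t in grid]
print("angle estimate (rad):", grid[int(np.argmax(stat))])
```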
1 SM, Mathematics, Massachusetts Institute of Technology
Advanced Embedded Computing
Dr. Paul Monticciolo1, Dr. William S. Song2, and Dr. Ramamurthy Bhagavatula3
MIT Lincoln Laboratory
Embedded computing continues to influence and change the way we live, as evidenced by its ubiquitous usage in consumer and industrial electronics and its ability to enable new growth areas, such as autonomous vehicles and virtual reality systems. Over the past several decades, the Department of Defense (DoD) has been leveraging advances in embedded computing to meet national needs in intelligence, surveillance, and reconnaissance; electronic warfare; and command-and-control applications that require tremendous compute capabilities in highly constrained size, weight, and power environments. Furthermore, defense platforms such as aircraft and ships can have lifetimes upwards of 50 years; hence, hardware and software architectures must be designed to provide both the performance and flexibility to support new applications and regular technology upgrades over such a long time frame.
Lincoln Laboratory’s Embedded and Open Systems Group has been a long-time leader in the design, development, and implementation of defense electronic components and systems. In this seminar, we will discuss advanced real-time hardware and software technologies and codesigned hardware and software system solutions. First, we will provide an overview of DoD-oriented embedded computing challenges, technologies, and trends. We will then explore extremely power-efficient custom application-specific integrated circuit and emerging system-on-chip solutions that parallel approaches used in smart phones. Heterogeneous system solutions that leverage field-programmable gate arrays, graphics processing units, and multi-core microprocessor components will then be addressed. Open standards-based software architectures that can meet real-time performance requirements, enable code portability, and enhance programmer productivity will be discussed. Representative implementation examples will be provided in each section. Our presentation will conclude with some prognostication on the future of embedded computing.
1 PhD, Electrical Engineering, Northeastern University
2 DSc, Electrical Engineering, Massachusetts Institute of Technology
3 PhD, Electrical Engineering, Carnegie Mellon University
Bioinspired Resource Management for Multiple-Sensor Target Tracking Systems
Dr. Dana Sinno1 and Dr. Hendrick C. Lambert2
MIT Lincoln Laboratory
We present an algorithm, inspired by self-organization and stigmergy observed in biological swarms, for managing multiple sensors tracking large numbers of targets. We have devised a decentralized architecture wherein autonomous sensors manage their own data collection resources and task themselves. Sensors cannot communicate with each other directly; however, a global track file, which is continuously broadcast, allows the sensors to infer their contributions to the global estimation of target states. Sensors can transmit their data (either as raw measurements or some compressed format) only to a central processor where their data are combined to update the global track file. We outline information-theoretic rules for the general multiple-sensor Bayesian target tracking problem and provide specific formulas for problems dominated by additive white Gaussian noise. Using Cramér-Rao lower bounds as surrogates for error covariances and numerical scenarios involving ballistic targets, we illustrate that the bioinspired algorithm is highly scalable and performs very well for large numbers of targets.
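The self-tasking loop can be caricatured in a few lines. In the hedged sketch below, each sensor reads the broadcast track file and tasks itself to the target whose scalar CRLB surrogate its measurement would shrink the most; the problem sizes and information values are invented, and the real algorithm's stigmergic and Gaussian-specific rules are not reproduced.

```python
# Toy sketch of decentralized, information-driven self-tasking. Sensors never
# talk to each other; they only read the broadcast track file (here, a scalar
# Fisher information per target, whose inverse is a CRLB surrogate).
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_targets = 4, 12                       # invented problem size
info = rng.uniform(0.5, 2.0, n_targets)            # global track file
meas_info = rng.uniform(0.1, 1.0, (n_sensors, n_targets))  # sensor quality

for step in range(3):
    for s in range(n_sensors):
        # Expected reduction of the CRLB surrogate: 1/J - 1/(J + J_meas)
        gain = 1.0 / info - 1.0 / (info + meas_info[s])
        target = int(np.argmax(gain))              # self-assigned task
        info[target] += meas_info[s, target]       # central fusion, rebroadcast
    print(f"step {step}: worst-case CRLB surrogate = {1.0 / info.min():.3f}")
```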
1 PhD, Electrical Engineering, Arizona State University
2 PhD, Applied Physics, University of California, San Diego
Multilithic Phased Array Architectures for Next-Generation Radar
Dr. Sean M. Duffy1
MIT Lincoln Laboratory
Phased array antennas provide significant operational capabilities beyond those achievable with dish antennas. Civilian agencies, such as the Federal Aviation Administration and Department of Homeland Security, are investigating the feasibility of using phased array radars to satisfy their next-generation needs. In particular, the Multifunction Phased Array Radar (MPAR) effort aims to eliminate nine different dish-based radars and replace them with radars employing a single, low-cost MPAR architecture. Also, unmanned air system operation within the National Airspace System requires an airborne sense-and-avoid (ABSAA) capability ideally satisfied by a small low-cost phased array.
Two example phased array panels are discussed in this talk. The first is the MPAR panel—a scalable, low-cost, highly capable S-band panel. This panel provides the functionality to perform the missions of air surveillance and weather surveillance for the National Airspace System. The second example is the ABSAA phased array, a low-cost Ku-band panel for unmanned air system collision avoidance radar.
The approach used in these phased arrays eliminates the drivers that lead to expensive systems. For example, the high-power amplifier is fabricated in a high-volume foundry and mounted in a surface mount package, thereby allowing industry-standard low-cost assembly processes. Also, all our integrated circuits contain multiple functions to save on semiconductor space and board-level complexity. Finally, the systems' multilayered printed circuit board assemblies combine the antenna, microwave circuitry, and integrated circuits, eliminating the need for hundreds of connectors between the subsystems; this streamlined design enhances overall reliability and lowers manufacturing costs.
1 PhD, Electrical Engineering, University of Massachusetts Amherst
Parameter Bounds Under Misspecified Models
Keith Forsythe1
MIT Lincoln Laboratory
Parameter bounds are traditionally derived assuming perfect knowledge of data distributions. When the assumed probability distribution for the measured data differs from the true distribution, the model is said to be misspecified; mismatch at some level is inevitable in practice. Thus, several authors have studied the impact of model misspecification on parameter estimation. Most notably, Peter Huber explored in detail the performance of maximum-likelihood (ML) estimation under a very general form of misspecification; he established consistency and asymptotic normality and derived the asymptotic covariance of the ML estimate, often referred to as the celebrated "sandwich covariance."
The goal of this talk is to consider the class of non-Bayesian parameter bounds emerging from the covariance inequality under the assumption of model misspecification. Casting the bound problem as one of constrained minimization is likewise considered. Primary attention is given to the Cramér-Rao bound (CRB). It is shown that Huber's sandwich covariance is the misspecified CRB and provides the greatest (tightest) lower bound under ML constraints. Consideration of the standard circular complex Gaussian model ubiquitous in signal processing yields a generalization of the Slepian-Bangs formula under misspecification. This formula, of course, reduces to the usual one when the assumed distribution is in fact the correct one. The framework is outlined for consideration of the Barankin/Hammersley-Chapman-Robbins, Bhattacharyya, and Bobrovsky-Mayer-Wolf-Zakai bounds under misspecification.
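For reference, Huber's result can be stated compactly. With data actually drawn from a density p while the model assumes f(x; θ), and with θ₀ the pseudo-true parameter (the Kullback-Leibler projection of p onto the model family), the asymptotic covariance of the ML estimate takes the sandwich form below; this is the standard statement of the result, not a formula quoted from the talk.

```latex
% Asymptotic ("sandwich") covariance of the misspecified ML estimate
\sqrt{n}\left(\hat{\theta}_{\mathrm{ML}} - \theta_0\right)
  \xrightarrow{d} \mathcal{N}\!\left(0,\; A^{-1} B\, A^{-1}\right),
\qquad
A = \mathbb{E}_p\!\left[\nabla_\theta^2 \ln f(x;\theta_0)\right],
\qquad
B = \mathbb{E}_p\!\left[\nabla_\theta \ln f(x;\theta_0)\,
      \nabla_\theta \ln f(x;\theta_0)^{\mathsf{T}}\right].
```

When the model is correctly specified, A equals minus B (the Fisher information), and the sandwich collapses to the familiar CRB.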
1 SM, Mathematics, Massachusetts Institute of Technology
Polarimetric Co-location Layering: A Practical Algorithm for Mitigation of Low Grazing Angle Sea Clutter
Molly Crane1
MIT Lincoln Laboratory
Traditional detection schemes in conventional maritime surveillance radars suffer serious performance degradation because of sea clutter, especially in low-grazing-angle geometries. In such geometries, typical statistical assumptions regarding sea clutter backscatter do not hold. Trackers can be overwhelmed by false alarms, while objects of interest may be challenging to detect amongst sea clutter. Despite numerous attempts over several decades to devise a means of mitigating the effects of low-grazing-angle sea clutter on traditional detection schemes, minimal progress has been made in developing an approach that is robust and practical.
To explore whether polarization information might offer an effective means of enhancing target detection in sea clutter, Lincoln Laboratory collected a fully polarimetric X-band radar dataset on the Atlantic coast of Massachusetts' Cape Ann in October 2015. The dataset spans multiple bandwidths, multiple sea states, and various targets of opportunity. Leveraging this dataset, Lincoln Laboratory developed an algorithm dubbed polarimetric co-location layering that retains detections on objects of interest while reducing the number of false alarms in a conventional single-polarization radar by as much as two orders of magnitude. Polarimetric co-location layering is robust across waveform bandwidths and sea states. Moreover, this algorithm is practical: it can plug directly into the standard radar signal processing chain.
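The layering idea itself is simple to picture. The toy sketch below keeps only detections that co-occur across polarimetric channels in the same resolution cell; the clutter statistics, thresholds, and the single injected target are invented, and the published algorithm is considerably richer.

```python
# Toy illustration of the co-location idea: a detection is kept only if the
# other polarimetric channels also detect in the same range-azimuth cell,
# suppressing spiky single-channel sea-clutter false alarms.
import numpy as np

rng = np.random.default_rng(3)
cells = (64, 64)                                   # range x azimuth grid
channels = {pol: rng.rayleigh(1.0, cells) for pol in ("HH", "HV", "VV")}
target_cell = (32, 40)
for pol in channels:                               # a real target is coherent
    channels[pol][target_cell] += 8.0              # across polarizations

THRESH = 3.0                                       # invented per-channel threshold
det = {pol: img > THRESH for pol, img in channels.items()}
colocated = det["HH"] & det["HV"] & det["VV"]      # layer: require agreement

print("per-channel detections:", {p: int(d.sum()) for p, d in det.items()})
print("co-located detections:", np.argwhere(colocated))
```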
1 PhD, Computer Engineering, Boston University
Polynomial Rooting Techniques for Adaptive Array Direction Finding
Dr. Gary F. Hatke1
MIT Lincoln Laboratory
Array processing has many applications in modern communications, radar, and sonar systems. Array processing is used when a signal in space, be it electromagnetic or acoustic, has some spatial coherence properties that can be exploited (such as far-field plane wave properties). The array can be used to sense the orientation of the plane wave and thus deduce the angular direction to the source. Adaptive array processing is used when there exists an environment of many signals from unknown directions as well as noise with unknown spatial distribution. Under these circumstances, classical Fourier analysis of the spatial correlations from an array data snapshot (the data seen at one instance in time) is insufficient to localize the signal sources.
In estimating the signal directions, most adaptive algorithms require computing an optimization metric over all possible source directions and searching for a maximum. When the array is multidimensional (e.g., planar), this search can become computationally expensive, as the source direction parameters are now also multidimensional. In the special case of one-dimensional (line) arrays, this search procedure can be replaced by solving a polynomial equation, where the roots of the polynomial correspond to estimates of the signal directions. This technique had not been extended to multidimensional arrays because such arrays naturally generate a polynomial in multiple variables, which does not have discrete roots.
This seminar introduces a method for generalizing the rooting technique to multidimensional arrays by generating multiple optimization polynomials corresponding to the source estimation problem and finding a set of simultaneous solutions to these equations, which contain source location information. It is shown that the variance of this new class of estimators is equal to that of the search techniques they supplant. In addition, for sources spaced more closely than a Rayleigh beamwidth, the resolution properties of the new polynomial algorithms are shown to be better than those of the search technique algorithms.
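The one-dimensional rooting idea that the seminar generalizes is the classical root-MUSIC construction, sketched below for a uniform line array with two sources; the array size, angles, and SNR are invented for the example.

```python
# Root-MUSIC sketch for a uniform line array: source directions appear as
# roots of a polynomial (built from the noise subspace) near the unit circle.
import numpy as np

rng = np.random.default_rng(4)
N, d, K = 10, 0.5, 200                      # elements, spacing (wavelengths), snapshots
angles = np.deg2rad([-20.0, 25.0])          # true source directions (invented)
A = np.exp(2j * np.pi * d * np.outer(np.arange(N), np.sin(angles)))
S = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
noise = 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
X = A @ S + noise

R = X @ X.conj().T / K                      # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, : N - 2]                          # noise subspace (2 sources assumed)
C = En @ En.conj().T

# Polynomial coefficients are sums along the diagonals of C (highest power
# first); roots of p(z) on the unit circle correspond to source directions.
coeffs = np.array([np.trace(C, offset=k) for k in range(N - 1, -N, -1)])
roots = np.roots(coeffs)
roots = roots[np.abs(roots) < 1.0]          # keep one of each reciprocal pair
best = roots[np.argsort(1.0 - np.abs(roots))[:2]]  # closest to unit circle
est = np.rad2deg(np.arcsin(np.angle(best) / (2 * np.pi * d)))
print("estimated directions (deg):", np.sort(est))
```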
1 PhD, Electrical Engineering, Princeton University
Synthetic Aperture Radar
Dr. Gerald R. Benitz1
MIT Lincoln Laboratory
Lincoln Laboratory is investigating the application of phased array technology to improve the state of the art in radar surveillance. Synthetic aperture radar (SAR) imaging is one mode that can benefit from a multiple-phase-center antenna. The potential benefits are protection against interference, improved area rate and resolution, and multiple simultaneous modes of operation.
This seminar begins with an overview of SAR, giving the basics of resolution, collection modes, and image formation. Several imaging examples are provided. Results from the Lincoln Multimission ISR Testbed (LiMIT) X-band airborne radar are presented. LiMIT employs an eight-channel phased array antenna and records 180 MHz bandwidth from each channel simultaneously. One result employs adaptive processing to reject wideband interference, demonstrating recovery of a corrupted SAR image. Another result employs multiple simultaneous beams to increase the area of the image beyond the conventional limit imposed by the pulse repetition frequency. Areas that are Doppler ambiguous can be disambiguated by using the phased array antenna.
1 PhD, Electrical Engineering, University of Wisconsin–Madison
Advanced Technology
Technical seminars offered in advanced technology.
3D Heterogeneous Integration Technology for Next-Generation RF Systems
Dr. Christopher Gailbraith1, Jeffrey Knecht2, Dr. Shireen M. Warnock3, and Donna-Ruth Yost4
MIT Lincoln Laboratory
Meeting the technical demands of modern phased arrays while keeping costs affordable is becoming increasingly difficult. The Laboratory has made substantial inroads in addressing these needs with arrays based on a tile architecture, but additional challenges appear when extending this approach to higher frequency and power domains. An increase in power favors the use of gallium nitride for select components, and an increase in the operational frequency makes it difficult to fit the electronics components within the dimensions of a tileable array. Continuing to keep costs low requires a rethinking of the choice of semiconductor materials. Meeting the space constraints requires creative ways to integrate components without sacrificing capability. In this presentation, we will discuss our development efforts to address these challenges with the use of 3D heterogeneous integration and the benefits this approach provides.
1 PhD, Electrical Engineering, University of Michigan
2 PhD, Electrical Engineering, University of Michigan
3 MS, Physics, Northeastern University
4 PhD, Electrical Engineering, Massachusetts Institute of Technology
Accelerated Discovery of Advanced Materials at MIT Lincoln Laboratory
Dr. Mark J. Polking1 and Dr. Kevin J. Tibbetts2
MIT Lincoln Laboratory
Throughout history, new materials have served as catalysts for disruptive technological progress, but the materials discovery process typically spans years to decades because of its reliance on physical intuition and serendipitous discovery. At Lincoln Laboratory, our vision is to accelerate the pace of materials discovery by orders of magnitude through a systematic, iterative, and materials-generic pathway from identification of application-specific material needs to material integration. This approach includes rigorous systems analysis to identify material property gaps, machine learning–enabled rapid screening of material candidates, property predictions with first-principles materials theory, and experimental synthesis and validation to confirm property predictions and system-level utility. We are applying this methodology to application areas including optical shutters for terrain mapping, identification of a replacement material for silicon for next-generation microelectronics, and optical filters for detection of ground-based laser threats. This seminar will provide an overview of our holistic approach to materials discovery and will highlight specific new tools for accelerated materials discovery, including the application of machine learning and artificial intelligence to materials screening and a highly reconfigurable materials discovery tool. Emphasis will also be placed on materials discoveries enabled by this methodology, such as new families of materials capable of large reflectivity modulation with low optical losses at visible frequencies.
1 PhD, Materials Science and Engineering, University of California, Berkeley
2 PhD, Materials Science and Engineering, Massachusetts Institute of Technology
Electromagnetic Vector Antenna and Constrained Maximum-Likelihood Imaging for Radio Astronomy
Dr. Frank C. Robey1, Dr. Alan J. Fenn2, and Dr. Mark J. Silver3
MIT Lincoln Laboratory
Mary E. Knapp4, Dr. Frank D. Lind5, and Dr. Ryan A. Volz6
MIT
Radio astronomy at frequencies below 50 MHz provides a window into nonthermal processes in objects ranging from planets to galaxies. These frequencies also provide insight into the formation of the universe. Ground-based arrays cannot adequately observe astronomical sources below about 20 MHz because of ionospheric perturbation and shielding; therefore, the sky has not been mapped with high angular resolution below that frequency. Radio astronomy at these frequencies must be accomplished in space. With space-based sensing, the cost of each satellite is high, and consequently we desire to maximize the information from each satellite. This presentation discusses designs for mapping the sky from space using electromagnetic vector sensors. These sensors measure the full electric- and magnetic-field vectors of incoming radiation and enable measurement with reasonable angular resolution from a compact sensor with a single phase center. A model for radio astronomy imaging is introduced, and the constraints imposed by Maxwell's equations and vector sensing of an electromagnetic field are explored. This presentation shows that the covariance matrix inherent in the stochastic process must lie in a highly constrained subset of allowable positive definite covariance matrices. Results are shown that use an expectation-maximization algorithm to form images consistent with a covariance matrix that satisfies the constraints. A conceptual design for a spacecraft to map the sky at frequencies below the ionospheric cutoff is discussed, along with concept development progress.
1 PhD, DSc, Electrical Engineering, Washington University in St. Louis
2 PhD, Electrical Engineering, The Ohio State University
3 PhD, Aerospace Engineering, University of Colorado Boulder
4 Graduate Student, Earth, Atmosphere, and Planetary Sciences Department, Massachusetts Institute of Technology
5 PhD, Geophysics, University of Washington
6 PhD, Aeronautics/Astronautics, Stanford University
Functional Fibers and Fabrics at MIT Lincoln Laboratory
Lauren Cantley1, Daniel Freeman2, and Michael Rickley3
MIT Lincoln Laboratory
At Lincoln Laboratory, we are pushing the frontiers of advanced functional fibers and fabrics, moving from age-old textiles to next-generation fabric systems with capabilities spanning sensing, communications, health monitoring, and more. To enable this development, our team focuses on multi-material fiber device fabrication: fibers with internal domains containing semiconductors, metals, and insulators. By judicious selection of materials and processing conditions, individual fibers with nanometer-scale semiconductor features can be produced in kilometer lengths, thus enabling electronic and opto-electronic fiber devices. Weaving these fibers into fabrics enables textiles with sophisticated properties tailored for national security mission areas. This seminar will provide an overview of this rapidly emerging field and will include several advanced fiber and fabric use cases, including ocean sensing, wearable chemical sensing, and health monitoring.
1 PhD, Mechanical Engineering, Boston University
2 PhD, Biomedical Engineering, Boston University
3 MBA, University of Massachusetts Lowell
Geiger-Mode Avalanche Photodiode Arrays for Imaging and Sensing
Dr. Brian F. Aull1
MIT Lincoln Laboratory
This seminar discusses the development of arrays of silicon avalanche photodiodes integrated with digital complementary metal-oxide semiconductor (CMOS) circuits to make focal planes with single-photon sensitivity. The avalanche photodiodes are operated in Geiger mode; they are biased above the avalanche breakdown voltage so that the detection of a single photon leads to a discharge that can directly trigger a digital circuit. The CMOS circuits to which the photodiodes are connected can either time stamp or count the resulting detection events. Applications include three-dimensional imaging using laser radar, wavefront sensing for adaptive optics, and optical communications.
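As a toy illustration of why per-pixel time stamping enables laser radar, the sketch below histograms simulated photon detection times and converts the histogram peak to range; the timing jitter, background rate, and scene depth are invented.

```python
# Toy Geiger-mode ranging sketch: each detection event carries a time of
# flight that maps directly to range; histogramming over many pulses pulls
# the signal out of uniformly distributed background/dark counts.
import numpy as np

C = 3.0e8                                    # speed of light, m/s
rng = np.random.default_rng(5)
true_range = 1250.0                          # meters, invented scene depth

t_signal = 2 * true_range / C + rng.normal(0, 0.5e-9, 300)  # 0.5 ns jitter
t_noise = rng.uniform(0, 20e-6, 200)         # background/dark counts
timestamps = np.concatenate([t_signal, t_noise])

bins = np.arange(0, 20e-6, 1e-9)             # 1 ns histogram bins
counts, edges = np.histogram(timestamps, bins)
t_hat = edges[np.argmax(counts)]             # peak of the timing histogram
print(f"estimated range: {C * t_hat / 2:.1f} m")
```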
1 PhD, Electrical Engineering, Massachusetts Institute of Technology
High-Power Laser Technology at MIT Lincoln Laboratory
Dr. T. Y. Fan1 and Dr. Darren A. Rand2
MIT Lincoln Laboratory
This seminar includes an overview of high-power laser technology development at Lincoln Laboratory. Two topics will be emphasized: laser beam combining and cryogenically cooled solid-state lasers. Beam combining, taking the outputs from arrays of lasers and combining them into a single beam with near-ideal propagation characteristics, has been pursued for many decades, but its promise has started to become a reality only within the last decade. Beam combining of both fiber and semiconductor arrays using both wavelength and coherent beam-combining approaches will be discussed. Cryogenically cooled solid-state lasers enable high-efficiency and high-average-power lasers while maintaining excellent beam quality. This approach is particularly applicable for developing laser sources with both high peak and average power.
1 PhD, Electrical Engineering, Stanford University
2 PhD, Electrical Engineering, Princeton University
Integrated Photonics for Sensing, Communications, and Signal Processing
Dr. Christopher Heidelberger1, Dr. Paul Juodawlkis2, Dr. Boris (Dave) Kharas3, Dr. Cheryl Sorace-Agaska4, Dr. Reuel Swint5, and Dr. Siva Yegnanarayanan6
MIT Lincoln Laboratory
Integrated photonics involves the aggregation of multiple optical or photonic components onto a common substrate using either monolithic or hybrid integration techniques. Common components include lasers, optical modulators, detectors, filters, splitters and combiners, couplers, and optical isolators. These components are typically connected using optical waveguides built into the substrate platform to create a photonic integrated circuit (PIC). Relative to optical circuits constructed from discrete components connected using optical fibers, PICs have a number of advantages, including reduced size and weight, reduced fiber interfaces and associated fiber-pigtailing cost, and improved environmental stability for coherent optical signal processing. These advantages are strengthened by combining PICs with electronic integrated circuits via several electronic-photonic integration techniques (wire bonding, flip chip, wafer bonding, or monolithic). Depending on the desired functions, PICs can be fabricated from a variety of materials including silicon, silicon nitride, and compound semiconductors (e.g., indium phosphide, gallium arsenide, gallium nitride, and their associated ternary and quaternary compounds).
In this seminar, we will provide an introduction to integrated photonics technology, including design, fabrication, packaging, and characterization. We will describe resources and capabilities to develop silicon, silicon nitride, compound-semiconductor, and hybrid PICs at Lincoln Laboratory using in-house fabrication resources. We will also describe several government applications of integrated photonics (e.g., remote sensing, free-space communications, and signal processing) and how these are similar and different from commercial applications.
1 PhD, Materials Science and Engineering, Massachusetts Institute of Technology
2 PhD, Electrical Engineering, Georgia Institute of Technology
3 PhD, Materials Science, State University of New York at Stony Brook
4 PhD, Electrical Engineering, Massachusetts Institute of Technology
5 PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
6 PhD, Electrical Engineering, University of California, Los Angeles
Laser Development at MIT Lincoln Laboratory
Dr. T. Y. Fan1
MIT Lincoln Laboratory
Lincoln Laboratory has been an innovator in laser technology since the laser’s earliest days. The Laboratory produced one of the early demonstrations of a semiconductor laser in 1962, which coincided with a demonstration of an optical communication link using an LED, and was soon followed by the first diode-pumped solid-state laser. Other developments in semiconductor lasers underpinned the laser transmitter technology used for fiber optic communications at 1.3- and 1.5-micron wavelengths. In the realm of solid-state lasers, the Laboratory has been responsible for the development of titanium-doped sapphire and ytterbium-doped yttrium aluminum garnet lasers, and for passively Q-switched microchip lasers, which have all found broad use. More recently, the Laboratory has been a leader in beam combining of fiber and semiconductor arrays using both wavelength and coherent beam combining approaches, which have found applications in materials processing and directed energy. This talk will provide a historical overview of these and other laser innovations and provide a snapshot for the future.
1 PhD, Electrical Engineering, Stanford University
New Fabrication Platforms and Processes
Dr. Bradley Duncan1 and Dr. Lalitha Parameswaran2
MIT Lincoln Laboratory
Additive manufacturing techniques, such as 3D printing and 3D assembly, have transformed production by breaking long-established rules of economies of scale and manufacturing complexity. Lincoln Laboratory has a suite of ongoing efforts to develop novel materials and processes for the construction of nonplanar 3D devices and integrated assemblies and systems using our expertise in material science, semiconductor fabrication, chemistry, and mechanical and electrical engineering. Our goal is to develop processes that enable "one-stop" fabrication of complete systems with both mechanical and electrical functionality. Ongoing programs include design of materials and processes for concurrent 3D printing of dissimilar materials and novel tailored material gradients; development of high-resolution, radio-frequency 3D-printed structures for low-size, -weight, and -power applications extending to the THz range; development of novel microplasma-based sputtering tools for direct write on nonstandard substrates; and development of reconstructed wafer processes to overcome die-size limitations and enable scaling of digital integrated circuits to large formats.
1 PhD, Organic Chemistry, University of Massachusetts Amherst
2 PhD, Electrical Engineering, Massachusetts Institute of Technology
Slab-Coupled Optical Waveguide Devices and Their Applications
Dr. Andrew Benedick1, Kevin Creedon2, Dr. Paul Juodawlkis3, Dr. Gary Smith4, Dr. Reuel Swint5, and Dr. George Turner6
MIT Lincoln Laboratory
For more than two decades, Lincoln Laboratory has been developing new classes of high-power semiconductor optoelectronic emitters and detectors based on the slab-coupled optical waveguide (SCOW) concept. The key characteristics of the SCOW design include (1) the use of a planar slab waveguide to filter the higher-order transverse modes from a large rib waveguide, (2) a low overlap between the optical mode and the active layers, and (3) a low excess optical loss. These characteristics enable waveguide devices having a large (> 5 × 5 μm) symmetric fundamental mode and long length (~1 cm). These large dimensions, relative to conventional waveguide devices, allow efficient coupling to optical fibers and external optical cavities and provide reduced electrical and thermal resistances for improved heat dissipation.
This seminar will review the SCOW operating principles and describe applications of the SCOW technology, including watt-class semiconductor SCOW lasers (SCOWLs) and amplifiers (SCOWAs), monolithic and ring-cavity mode-locked lasers, single-frequency external cavity lasers, and high-current waveguide photodiodes. The SCOW concept has been demonstrated in a variety of material systems at wavelengths including 915, 960–980, 1060, 1300, 1550, 1650, and 2100 nm. In addition to single emitters, higher brightness has been obtained by combining arrays of SCOWLs and SCOWAs using wavelength beam-combining and coherent combining techniques. These beam-combined SCOW architectures offer the potential of kilowatt-class, high-efficiency, electrically pumped optical sources.
1 PhD, Electrical Engineering, Massachusetts Institute of Technology
2 BS, Electrical Engineering, Villanova University
3 PhD, Electrical Engineering, Georgia Institute of Technology
4 PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
5 PhD, Electrical Engineering, University of Illinois at Urbana-Champaign
6 PhD, Electrical Engineering, Johns Hopkins University
So, You Want to Build a Quantum Computer? Hardware for a New Computing Paradigm
Dr. Mollie E. Schwartz1
MIT Lincoln Laboratory
Quantum computing, first proposed in the 1980s by physicist Richard Feynman as a method for simulating complicated materials, has grown from a scientific novelty into a multibillion-dollar industry. It promises to revolutionize high-performance computing applications such as cryptography, chemistry, and machine learning. But what is a quantum computer? What gives it its potentially groundbreaking computing power? And how can we build one?
This talk will provide an introduction to quantum computing and will describe the superconducting modality for realizing quantum hardware. In this hardware approach, high-quality metallic thin films are printed on silicon substrates using micro- and nano-fabrication techniques inspired by microprocessor and integrated circuit fabrication. When cooled to within ten thousandths of a degree of absolute zero, these circuits behave like artificial atoms that can be controlled by externally applied microwaves and low-frequency currents. Lincoln Laboratory has been active in developing this technology since its inception, and we are leading the state of the art in technologies needed to build and control extensible superconducting quantum systems. We will describe the advantages and limitations of the superconducting approach to quantum hardware, outline the state of the art in superconducting qubits, and highlight the progress made at Lincoln Laboratory toward realizing this architecture for quantum computing.
1 PhD, Physics, University of California, Berkeley
Space Systems Technology
Technical seminars offered in space systems technology.
Computational Reconfigurable Imaging Spectrometer (CRISP)
Dr. Charles M. Wynn1
MIT Lincoln Laboratory
Hyperspectral imaging (HSI) systems collect and process information at each pixel in the imager from across a wide range of the electromagnetic spectrum, in contrast to standard red/green/blue imagers that only use three frequency bands (colors). This spectral information is useful in numerous applications, including environmental monitoring (e.g., weather sensing imagery), biomedical imaging, surveillance of security threats (e.g., chemical warfare releases, nuclear production emissions, drug manufacturing emissions, and explosive components), food safety inspection and control, and agricultural/mineralogy monitoring. The longwave infrared (LWIR) frequency band is a particularly useful spectral region; however, current LWIR HSI systems are large and expensive because of their use of cooled imagers.
CRISP (Computational Reconfigurable Imaging Spectrometer) is a novel, practical concept that demonstrates a high-sensitivity and low-size, -weight, and -power uncooled LWIR HSI system suitable for both spaceborne and airborne platforms. CRISP utilizes a unique optical design in conjunction with a computational imaging approach, which allows for passive multiplexing of the spectral information on each pixel, in contrast to other multiplex imaging concepts, which require active components. Multiplexing improves system sensitivity, and performing the multiplexing passively maintains high performance while offering reduced complexity, improved robustness, and reduced size, weight, and power, all of which are critical considerations for airborne or spaceborne systems.
This talk will describe the CRISP architecture and design, show lab and flight test results from the prototype CRISP system, and discuss ongoing technology developments focused on space-based Earth climate science applications.
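The multiplexing advantage can be illustrated with a generic toy model (not CRISP's actual optical encoding): measuring weighted sums of spectral bands and demultiplexing computationally averages down per-readout detector noise relative to measuring one band at a time. In the sketch below, ±1 Hadamard weights stand in for paired physical mask measurements.

```python
# Toy illustration of the multiplex (Fellgett) advantage: recover n spectral
# bands from n weighted-sum measurements, then compare noise to a direct
# one-band-at-a-time baseline. All sizes and noise levels are invented.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(6)
n = 16                                        # number of spectral bands
spectrum = rng.uniform(0, 1, n)               # unknown scene spectrum
H = hadamard(n)                               # +/-1 mask weights (toy stand-in)

sigma = 0.05                                  # detector noise per readout
y = H @ spectrum + rng.normal(0, sigma, n)    # multiplexed measurements
recovered = H.T @ y / n                       # H is orthogonal up to scale n

direct = spectrum + rng.normal(0, sigma, n)   # one-band-at-a-time baseline
print("multiplexed rmse:", np.sqrt(np.mean((recovered - spectrum) ** 2)))
print("direct rmse:     ", np.sqrt(np.mean((direct - spectrum) ** 2)))
```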
1 PhD, Physics, Clark University
Haystack Ultrawideband Satellite Imaging Radar Antenna
Dr. Joseph M. Usoff1
MIT Lincoln Laboratory
The Haystack facility in Westford, Massachusetts, has been in operation since 1964 and has conducted a wide range of communications, radar, and radio astronomy missions. Lincoln Laboratory, under sponsorship of the United States Air Force, recently upgraded the facility, including the replacement of the 120-foot diameter Cassegrain antenna with a new antenna and the addition of a wideband W-band radar to the existing wideband X-band radar. The upgraded antenna is of the same diameter and has the same optical parameters as the old antenna, but the surface tolerance has been significantly improved to be better than 100 µm rms. The improved antenna surface tolerance permits efficient operation at higher frequencies, enabling W-band radar operations and radio astronomy experiments up to the 230 GHz band. This presentation will provide an overview of the new antenna, describe the fabrication and construction challenges, and highlight the surface alignment process.
1 PhD, Electrical Engineering, The Ohio State University
Overview of the NASA TROPICS CubeSat Constellation Mission
Dr. William J. Blackwell1
MIT Lincoln Laboratory
Recent technological advances in miniature microwave radiometers that can be hosted on very small satellites have made possible a new class of constellation missions that provide very high revisit rates of tropical cyclones and other severe weather. The Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission was selected by NASA as part of the Earth Venture–Instrument (EVI-3) program and is now in development with launches planned in early 2022. The overarching goal for TROPICS is to provide nearly all-weather observations of 3D temperature and humidity, as well as cloud ice and precipitation horizontal structure, at high temporal resolution to conduct high-value science investigations of tropical cyclones, including:
- Relationships of rapidly evolving precipitation and upper cloud structures to upper-level warm-core intensity and associated storm intensity changes
- Evolution (including diurnal variability) of precipitation structure and storm intensification in relationship to environmental humidity fields
- The impact of rapid-update observations on numerical and statistical intensity forecasts of tropical cyclones
TROPICS will provide rapid-refresh microwave measurements (median refresh rate better than 60 minutes for the baseline mission) over the tropics that can be used to observe the thermodynamics of the troposphere and precipitation structure for storm systems at the mesoscale and synoptic scale over the entire storm lifecycle. TROPICS will comprise a constellation of six 3U CubeSats in three low-Earth orbital planes. Each CubeSat will host a high-performance scanning radiometer to provide temperature profiles using seven channels near the 118.75 GHz oxygen absorption line, water vapor profiles using three channels near the 183 GHz water vapor absorption line, imagery in a single channel near 90 GHz for precipitation measurements (when combined with higher resolution water vapor channels), and a single channel at 205 GHz that is more sensitive to precipitation-sized ice particles and low-level moisture.
This observation system offers an unprecedented combination of horizontal and temporal resolution in the microwave spectrum to measure environmental and inner-core conditions for tropical cyclones on a nearly global scale. It is a major leap forward in the temporal resolution of several key parameters needed for assimilation into advanced data assimilation systems capable of utilizing rapid-update radiance or retrieval data. This presentation will provide an overview of the mission and an update on its current status, with a focus on recent performance simulations for a range of observables to be provided by the constellation, including temperature, water vapor, rain rate, and tropical cyclone intensity indicators.
1 PhD, Electrical Engineering, Massachusetts Institute of Technology
Space Fence Radar Overview
Melissa Schoenfeld1
MIT Lincoln Laboratory
Satellites provide a wide range of services to billions of customers every day, supporting everything from cellular phones to television to weather monitoring. Technological advancements have allowed increasingly small, highly capable satellites to be launched. Additionally, several satellite collisions have increased the quantity and threat of space debris. As the population of satellites and debris proliferates, the need for space surveillance has become critically important. Traditionally, space surveillance has focused on tracking one satellite at a time to establish an updated position in a large, centrally managed catalog. In today’s new space environment, it is no longer an efficient use of time or resources to approach the space surveillance problem one satellite at a time.
The Space Fence Radar, located on Kwajalein Atoll in the Marshall Islands, is one of the newest radar systems designed for today’s space surveillance needs. Space Fence is a phased array radar designed to automatically detect and track satellites in low-Earth orbit with its fan-shaped surveillance fence. With its digital beam-forming technology, Space Fence can track hundreds of objects simultaneously. Space Fence is even capable of supporting some missions in geosynchronous Earth orbit at more than 35,000 kilometers from Earth. In the design and testing of the Space Fence radar, modeling and simulation have played a key role in all phases of the program, including testing requirements that depend on simulated environments or unique orbital events. Lincoln Laboratory developed the Performance Assessment Simulator (PAS) to facilitate the development and testing of the Space Fence Radar software. The PAS provides a satellite simulation environment that interfaces with the radar model by providing space scenarios and collecting simulation performance results.
1 BS, Mathematics, Tufts University; MA, Mathematics, Boston University
Synoptic Astronomy with the Space Surveillance Telescope
Dr. Deborah F. Woods1
MIT Lincoln Laboratory
The Space Surveillance Telescope (SST) is a 3.5-meter telescope with a 3-degree-by-2-degree field of view developed by Lincoln Laboratory for the Defense Advanced Research Projects Agency for tracking satellites and Earth-orbiting space debris. The telescope has a three-mirror Mersenne-Schmidt design that obtains a fast focal ratio of F/1.0, enabling deep, wide-area searches with rapid revisit rates. These attributes also have utility for detections in time domain astronomy, in which objects are observed to vary in position or in brightness. The SST acquired asteroid observations in support of the Lincoln Near-Earth Asteroid Research program from 2014 to 2017. In the course of its observing programs, the SST has amassed a treasure trove of archival image data. While the SST is currently being relocated to northwestern Australia, analysis with archival data continues.
With the development of a science pipeline for the processing of archival data, the SST is contributing to studies in time domain astronomy. The science goals are to identify specific classes of variable stars and to use the stars' period of variability and color information from other astronomical catalogs to estimate distances to the objects. Object distances enable the 3D reconstruction of compact halo groups around the Milky Way galaxy, which help inform galaxy formation models. While the SST's primary mission is to support space situational awareness, the image data collected during the course of operations have been valuable for astronomical applications in the time domain.
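The distance-estimation step rests on standard stellar astrophysics: an assumed period-luminosity relation gives a variable star's absolute magnitude, and the distance modulus then gives its distance. The sketch below uses placeholder coefficients, not the survey's calibrated relation.

```python
# Sketch of the distance-estimation step for a pulsating variable star.
# The period-luminosity coefficients a and b are illustrative placeholders.
import numpy as np

def distance_pc(apparent_mag, period_days, a=-2.4, b=-1.0):
    """Distance in parsecs from m and an assumed P-L relation M = a*log10(P) + b."""
    M = a * np.log10(period_days) + b          # absolute magnitude (assumed)
    # distance modulus: m - M = 5 log10(d / 10 pc)
    return 10 ** ((apparent_mag - M + 5) / 5)

print(f"{distance_pc(apparent_mag=17.2, period_days=5.0):.0f} pc")
```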
1 PhD, Astronomy, Harvard University