All times are Pacific (California) time. The schedule is subject to changes in time and location.
This schedule, with more information (slides, authors, etc.), is also available on the detailed Schedule page in EasyChair.
MONDAY, JULY 15, 2024
Major companies such as SpaceX are offering opportunities for scientists, researchers, and small businesses to send custom electronics hardware prototypes into orbit. This initiative enables them to prototype and conduct experiments in the unique environment of space. Nevertheless, the associated costs, both direct and indirect, often pose affordability challenges for smaller entities and researchers. In this presentation, we unveil a cutting-edge computing platform designed to offer small companies and research institutions rapid, dependable, and economical access to space. Our platform caters both to customers equipped with their own computing capabilities who seek to expedite and distribute computations through resource rental, and to those who prefer to concentrate on their core research without the hassle of designing custom boards, accelerators, and software. This platform guarantees that all customers have access to top-tier, state-of-the-art computational nodes, including GPUs, multicore CPUs, FPGAs, and more. These resources can be rented on demand and tailored to the specific computational and performance needs of each customer.
Distributed computing provides two major advantages to space travel and exploration. First, it provides performance that can scale up or down based on computing needs during a mission, and thereby a potential for power reduction. Second, it offers fault tolerance: a space system can continue working when it encounters erroneous conditions in its computing or storage devices caused by radiation and other hazards. A coherence-capable distributed computing architecture further enables (1) the use of computing nodes across a network (such as Ethernet), whether on-board or remote, and (2) the use of different processing system types (heterogeneous computing) in a distributed computing cluster. The architecture discussed in this paper also enables the use of remote networked memory/storage systems in a distributed computing environment. LeWiz Communications and Western Digital previously open sourced an architecture that can support coherent distributed computing. This paper will present its architecture, potential applications, and advantages.
Space Time Adaptive Processing (STAP) radar systems require rapid execution of complex matrix multiplications to calculate and apply covariances and adaptive filtering weights, to achieve filtering of ground clutter and to mitigate jamming. Developers of these systems require capabilities to rapidly simulate and prototype their designs. In this paper we discuss the signal processing requirements of a modern STAP radar system, and describe the implementation of a STAP design in multiple phases, with a hardware-in-the-loop simulation using MATLAB®/Simulink® tools from MathWorks® and with a full implementation in the AMD Versal Adaptive SoC, which is available as a radiation-tolerant space-grade device. Our implementation phases make use of the Adaptive Intelligent Engines in the Versal architecture to achieve rapid execution of matrix multiplication with complex values and to allow rapid modification of algorithms, without incurring additional development time due to repeated place-and-route and static timing analysis cycles.
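As a concrete illustration of the computation at the heart of such a system (a sketch under assumed dimensions, not the paper's implementation), the adaptive filtering weights come from the estimated clutter-plus-noise covariance and a space-time steering vector, which is exactly the kind of complex matrix arithmetic the abstract describes:

```python
import numpy as np

# Illustrative STAP weight computation: w = R^{-1} s / (s^H R^{-1} s).
# Array sizes are hypothetical, not taken from the paper.
N, M = 8, 16                       # antenna elements x pulses -> NM-dim snapshot
K = 3 * N * M                      # training snapshots for covariance estimation

rng = np.random.default_rng(0)
X = rng.standard_normal((N * M, K)) + 1j * rng.standard_normal((N * M, K))
R = X @ X.conj().T / K             # sample covariance of clutter + noise
R += 1e-3 * np.eye(N * M)          # diagonal loading for numerical stability

s = np.ones(N * M, dtype=complex)  # space-time steering vector (boresight)
w = np.linalg.solve(R, s)          # R^{-1} s without forming the inverse
w /= s.conj() @ w                  # normalize so w^H s = 1 (distortionless)

y = w.conj() @ X[:, 0]             # apply the adaptive filter to one snapshot
```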
This paper characterizes and discusses the software scripts and tools used by the Mars Science Laboratory rover planner team, and describes the effort to unify the management of this tool suite. Rover Planners have historically used numerous command line tools to aid them in designing and implementing activities over the course of a tactical planning shift. The bulk of these tools were developed ad-hoc by various individuals, with little oversight or organization. As a result, the scripts were poorly documented, difficult to locate, and mostly did not have any formal testing associated with them. As part of the modernization process, we identified all of the existing scripts and their source code locations, moved them to a centralized location within the operations venue, and wrote unit tests and integration tests for every tool. Many tools were upgraded from Python 2 to Python 3, or converted from other languages to Python 3 where feasible. We implemented a system for reviewing changes and continuous testing by moving all scripts to a single git repository, where we track and actively maintain them. Pull requests are tested automatically using Jenkins, and the entire suite of scripts and library functions is tested upon every deployment of the suite. We manage feature requests and bug fixes via GitHub issues, and a working group meets biweekly to discuss changes and progress on efforts relating to software-based tools for the MSL rover planners. In this paper, we detail the design of a unified system for managing these command line tools, the implementation of said system, and the innovation and utility of these tools and how they improve the tactical planning process for MSL Rover Planners. We present this framework as an example for other mission operations teams to use to manage standalone command line scripts that make use of common tools and services.
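For illustration, a unit test in the style described might look like the following sketch, runnable by pytest locally and by Jenkins on every pull request (the tool module `rp_tools.drive_summary` is a hypothetical stand-in, not an actual MSL tool):

```python
import subprocess

# Hypothetical example of the per-tool unit tests added during modernization:
# invoke the command line tool as a subprocess and check its basic contract.
def test_tool_help_exits_cleanly():
    result = subprocess.run(
        ["python3", "-m", "rp_tools.drive_summary", "--help"],
        capture_output=True, text=True)
    assert result.returncode == 0            # tool must not crash on --help
    assert "usage" in result.stdout.lower()  # argparse-style usage banner
```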
Space systems are continuously under cyber attack. Minimum cybersecurity design requirements are necessary to preserve our access to space. This paper proposes a scalable, extensible method for developing minimum cyber design principles and subsequent requirements for a space system based on any given mission priority. To test our methodology, we selected the fundamental mission priority of preserving access to space by preventing the permanent loss of control of a satellite. We then generate the minimum set of secure-by-design principles that guard against the permanent loss of control of a satellite and translate these into example minimum requirement `shall' statements. Our proposed minimum requirements methodology and example can serve as a starting point for policymakers aiming to establish security requirements for the sector. Further, our methodology for establishing minimum requirements will be used to prioritize the efforts of the emergent IEEE International Technical Standard for Space Cybersecurity (Working Group P3349).
On past and present human space missions, the management of vehicle health and status has primarily been executed from Earth. Missions such as Apollo, Space Shuttle, and ISS have relied on a safety net of ground-based experts with access to real-time telemetry data, broad and deep systems expertise, and powerful analytical and computing capabilities. The ground team monitors and manages the vehicle’s health in real-time and responds quickly to critical situations and malfunctions. Ground operators also provide real-time oversight and verbal guidance to flight crewmembers, especially during complex procedure execution and high-risk activities like extra-vehicular activities.
However, this operational paradigm, in place for 60 years, will not transfer to long duration exploration missions beyond low Earth orbit (LEO). Lunar and deep-space crewed missions will encounter delayed communications that prohibit real-time operational and medical support. Additionally, there will be infrequent resupply and a diminished capacity to evacuate or rescue crewmembers. A small crew must operate independently, managing the vehicle’s state, responding to time-critical events, and executing complex procedures, all without the safety net of real-time support.
A key challenge for a small crew lies in the vast amount of data they must process to support procedure execution and anomaly response. In today’s ISS mission control center, 15-20 flight controllers (working 24 hrs/day in three shifts) continuously monitor real-time data for their respective subsystems, supplemented by Back Room and Mission Evaluation Room (MER) engineers. They work to detect failures, assess impacts, troubleshoot, identify workarounds, and oversee procedure execution—all of which require access to and understanding of extensive engineering and procedure information, as well as system build, test, and configuration documentation.
As problem solving transitions onboard for missions beyond LEO, the crew needs more than mere access to this wealth of information. It must be compiled, refined, and presented appropriately to support a small crew with far less time and expertise. This challenge is aggravated by the relatively underpowered computing capabilities available to crews in space, which are designed to endure radiation and other environmental hazards. Consequently, the capability of these systems may lag behind their terrestrial counterparts by years or even decades. Additionally, crews in space contend with significantly less display real estate compared to ground operators. While ground operators can utilize multiple large displays, the crew must manage with resources more akin to a single laptop display.
Our team’s investigation into past anomalies on Apollo and ISS missions unveiled key characteristics that make unanticipated, time-critical anomalies so challenging to resolve, including imperfect sensor data, complex causal relationships, and limited intervention options. Beyond LEO, onboard systems need capabilities that will support the crew in creative and critical problem-solving to overcome some of those challenges. This paper extends our past work, identifying the core crew interface characteristics needed to support onboard time-constrained problem solving and decision making under conditions of delayed communications with the ground team. Drawing insights from analogous domains, including healthcare and nuclear power, we present preliminary recommendations for organizing and integrating information for effective problem solving. A case study of an actual ISS anomaly resolution will be used to envision the onboard information and decision support systems for Earth independent problem solving.
The real time control of many-actuator adaptive optics systems will allow future space telescopes to suppress starlight and directly image and characterize exoplanets. In the future, a measurement by this technique may be the first to directly detect extraterrestrial life in the universe. However, the real-time execution of adaptive control algorithms will place unprecedented demands on spaceborne processors. Previous work has estimated the necessary level of computational system performance based on computational density analysis. In this work, we first evaluate the relevant algorithms in numerical detail, and decompose the top-level computational system into subsystems. We then perform requirements flow-down to these subsystems to evaluate the expected performance of a range of candidate processors. We additionally consider radiation degradation of the control processor within the context of a high contrast imaging mission. With this system decomposition and requirements flow-down, we survey relevant space processors for their expected performance on wavefront sensing and control algorithms. This analysis supports the need for further development of high performance radiation tolerant processors.
The motion of vehicles is influenced by controllers designed in Guidance, Navigation, and Control (GNC). The classic tradeoff in GNC algorithms is between performance and actuation, but any quantifiable variable can be part of the penalties that influence the governance of the system. The purpose of this research is to characterize computational resources so that they can be used by GNC controllers in situations where limited computational hardware, such as a Raspberry Pi Model B+ Rev 1.2, is combined with computationally expensive algorithms, such as Model Predictive Control (MPC). In application, lack of funding or external factors can cause available computing resources to be limited. In space applications, processors have to be radiation hardened to become fault tolerant of the ionized space environment they operate in. This radiation hardening impairs the availability of computational resources, such that an everyday cell phone can have orders of magnitude more computational resources. Several computational metrics, such as central processing unit (CPU) utilization, active power consumption, and physical memory usage, are of primary interest in this research. This research utilizes a test bench of different hardware (HW), where algorithms are loaded and executed “on-board” and the computational resources measured during execution. These signals are then analyzed using the principles of system identification: measurement of the signal, selecting a model structure, estimating adjustable parameters in the model structure, and evaluating the estimated model’s predictive performance. The target predictive model for computational resources is a spring-mass-damper system. This predictive model will then be incorporated into control synthesis, and the resulting system dynamics adjusted based on penalties incurred by the computational resource status. The algorithms are compiled in C++. The MPC algorithm has tunable convergence parameters, such as maximum number of iterations and numerical tolerance. These values are adjusted to create different signal dynamics in the computational resource metrics for analysis. In one benchmark exercise, the executable is called in a loop with static values to repeatedly provide an “impulse” to the computational metrics. Active power consumption is measured with an ONSET Hobo Plug Load Logger UX120-018, and computational resources are collected through Linux terminal monitoring commands with output redirected to file. Follow-on work will evaluate GNC algorithms with representative spacecraft values and create a control structure called Real-Time Recursive Optimization (R2O) that adjusts the GNC algorithms given the availability of computational resources. Alternative computational resource collection methods from inside the user-issued process are underway.
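A minimal sketch of the benchmark loop described above, assuming a hypothetical compiled MPC executable and command line flags, and using Python's psutil in place of the Linux terminal monitoring commands:

```python
import subprocess, time, psutil

# Illustrative sketch: repeatedly invoke the compiled MPC executable as an
# "impulse" to the computational metrics and log CPU and memory utilization.
# The executable name and flags are hypothetical placeholders.
samples = []
for _ in range(20):
    t0 = time.time()
    subprocess.run(["./mpc_solver", "--max-iter", "50", "--tol", "1e-6"],
                   check=True)
    samples.append({
        "latency_s": time.time() - t0,
        "cpu_pct": psutil.cpu_percent(interval=0.1),
        "mem_pct": psutil.virtual_memory().percent,
    })

# The logged step responses can then be fit to a second-order
# (spring-mass-damper) model with standard system-identification tools.
```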
Autonomous space missions are dependent on their onboard computing capability to carry out their prescribed tasks. Ideal flight hardware systems must be capable of performing mission-critical tasks in real time while satisfying size, weight, power, and cost (SWaP-C) constraints of the mission. Additionally, space hardware must also be robust to the unique hazards of space environments. To expand the onboard computing capability for future space missions, new hardware platforms must be evaluated to determine their computational performance, power usage, and radiation sensitivity. One such hardware platform is the Gemini Associative Processing Unit (APU), which features a unique processing-in-memory (PIM) architecture to limit memory transfer operations. With this architecture, computations can be performed directly in memory, eliminating the need for excess data transfers which can negatively affect performance. This study conducts a performance comparison of Gemini APU devices with both modern desktop hardware as well as current generation flight hardware. Based on the results collected in this research, Gemini APU devices can provide much higher performance per watt than modern terrestrial CPUs, and offer uniquely scalable performance at a power profile similar to modern flight hardware.
Large space systems such as space telescopes capture high resolution images continuously. These are processed with on-board DSP, then packetized for continuous transmission over networks such as TCP/IP and Ethernet to remote processing nodes. Such transmissions tend to be bursty and can reach 100 Gbps, making software-based solutions infeasible. Space-capable FPGA devices are used to implement these functions in hardware. Space systems, however, are constrained by cost, available FPGA resources, and power. This paper presents (1) a scalable, multi-channel architecture for hardware implementation of such a data capture, transmission, and analysis chain, (2) an interface mechanism for transmitting data coherently so that a remote system can receive and analyze it using commercial tools, and (3) a framework for building and validating such high-performance systems using commercially available computing systems. Lastly, it discusses other potential applications of such an architecture in Earth orbit systems requiring data capture and analysis.
This paper provides a comprehensive overview of the µD3TN project's development, detailing its transformation into a flexible and modular software implementation of the Delay-/Disruption-Tolerant Networking (DTN) Bundle Protocol. Originating from µPCN, which was designed for microcontrollers, µD3TN has undergone significant architectural refinement to increase flexibility, compatibility, and performance across various DTN applications. Key developments include achieving platform independence, supporting multiple Bundle Protocol versions concurrently, introducing abstract Convergence Layer Adapter (CLA) interfaces, and developing the so-called Application Agent Protocol (AAP) for interaction with the application layer. Additional enhancements, informed by field tests, include Bundle-in-Bundle Encapsulation and exploring a port to the Rust programming language, indicating the project's ongoing adaptation to practical needs. The paper also introduces the Generic Bundle Forwarding Interface and AAPv2, showcasing the latest innovations in the project. Moreover, it provides a comparison of µD3TN's architecture with the Interplanetary Overlay Network (ION) protocol stack, highlighting some general architectural principles at the foundation of DTN protocol implementations.
This paper addresses critical improvements in the Schedule-Aware Bundle Routing (SABR) standard, pivotal for distributed space missions based on Delay-Tolerant Networking (DTN). With a focus on volume management, defined as efficiently allocating and utilizing the data transmission capacity of network contacts, we explore enhancements for distributed and scheduled DTNs. Our analysis begins by identifying and scrutinizing existing gaps in volume management within the SABR framework. We then introduce a novel concept coined contact segmentation, which streamlines the management of the transmission volumes. Our approach spans all network contacts, initial and subsequent, by unifying previously separate methods such as Effective Volume Limit (EVL), Earliest Transmission Opportunity (ETO), and Queue-Delay (QD) into a single process. Lastly, we propose a refined generic interface for volume management in SABR, enhancing the system’s maintainability and flexibility. These advancements rectify current limitations in volume management and lay a foundation for more resilient and adaptable space DTN operations in the future.
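To make the notion of volume management concrete, the following sketch (illustrative only, not the SABR interface proposed in the paper) shows the basic bookkeeping that EVL/ETO-style mechanisms perform on a single contact: capacity is rate times duration, booking a bundle consumes volume, and the fill level implies an earliest transmission opportunity:

```python
from dataclasses import dataclass

# Hypothetical contact-volume accounting, for illustration only.
@dataclass
class Contact:
    start: float         # contact start time, s
    end: float           # contact end time, s
    rate: float          # link rate, bytes/s
    booked: float = 0.0  # bytes already allocated to earlier bundles

    @property
    def volume(self) -> float:
        return self.rate * (self.end - self.start)

    @property
    def residual(self) -> float:
        return self.volume - self.booked

    def book(self, nbytes: float) -> float:
        """Reserve volume; return the earliest transmission opportunity (s)."""
        if nbytes > self.residual:
            raise ValueError("contact volume exceeded")
        eto = self.start + self.booked / self.rate  # queue drains at link rate
        self.booked += nbytes
        return eto

c = Contact(start=100.0, end=200.0, rate=1e6)   # 100 MB total volume
print(c.book(5e6), c.residual)                  # ETO 100.0 s; 95 MB remain
```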
This document surveys existing terrestrial network security practices, focusing on X.509 public key infrastructure (PKIX), and identifies ways that the existing systems and protocols can be used in a delay-tolerant networking (DTN) environment. Additional discussion of protocols currently under development shows how PKIX security can be used directly and efficiently within a DTN. These are combined into one possible vision for distributed and autonomous security within the NASA LunaNet architecture.
The Bundle Protocol (BP) was designed to address the challenges inherent in space communications. While already in use in several projects led by various space agencies, including the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA), there is a need to expand BP’s capabilities, including in Quality of Service (QoS) support, an area currently lacking standardization. This document proposes a dual QoS support block for BP which facilitates the definition of QoS requirements at the source in an immutable manner while allowing dynamic adjustments by networks or subnetworks. Furthermore, preliminary results are presented, analyzing the effects of the proposed traffic prioritization system and the weighted queue management. These results show improved end-to-end delay for time-sensitive information, and a higher rate of achieved QoS requirements for all priority classes, as well as a fairer approach to network scheduling.
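As an illustration of the weighted queue management idea (a sketch with hypothetical priority classes and weights, not the proposed block's actual format), each scheduling round lets a class send up to its weight in bundles, so lower-priority traffic still makes progress instead of starving:

```python
from collections import deque

# Hypothetical BP priority classes and weights, for illustration only.
queues = {"bulk": deque(), "normal": deque(), "expedited": deque()}
weights = {"bulk": 1, "normal": 2, "expedited": 4}

def drain_round():
    """One weighted round: each class sends at most its weight in bundles."""
    sent = []
    for cls, q in queues.items():
        for _ in range(min(weights[cls], len(q))):
            sent.append(q.popleft())
    return sent

for i in range(6):
    queues["bulk"].append(f"b{i}")
    queues["expedited"].append(f"e{i}")
print(drain_round())  # ['b0', 'e0', 'e1', 'e2', 'e3'] -- bulk is not starved
```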
A datacenter for use in Space offers high performance computing similar to small terrestrial datacenters, including storage, networking, and cloud computing. When operated using software platforms from commercial Cloud Service Providers, the range of enabled applications is quite similar to terrestrial clouds. The Space datacenter is planned for reliable power-efficient operation for longer than 30 years in any Space environment, employing on-board AI-based FDIR and self-healing. A small version is based on a single 6U-VPX enclosure. The large version is packaged in less than one cubic meter and offers about 13 TFLOPS, 100 AI/ML TOPS, and 3.5 PB of storage while dissipating 12 kW.
Sensing and onboard-processing capabilities of next-generation spacecraft continue to evolve. Enabled by advances in avionic systems, large amounts of data can be collected and stored on orbit. Nevertheless, loss of signal, communication delays, and limited downlink rates remain a bottleneck for delivering data to ground stations or between satellites. This research investigates a multistage image-processing pipeline and demonstrates rapid collection, detection, and transmission of data using the Space Test Program - Houston 7 - Configurable and Autonomous Sensor Processing Research experiment aboard the International Space Station as a case study. Machine-learning (ML) models are leveraged to perform intelligent processing and compression of data prior to downlink to maximize available bandwidth. Furthermore, to ensure accuracy and preserve data integrity, a fault-tolerant ML framework is employed to increase pipeline reliability. This pipeline fuses the fault-tolerant Resilient Tensorflow framework with ML-based tile classification and the CNNJPEG compression algorithm. This research shows that the imaging pipeline is able to alleviate the impact of limited communication bandwidth by using reliable, autonomous data processing and compression techniques to achieve reduced transfer sizes of essential data. The results highlight the benefits provided by resilient classification and compression, including minimized storage use and reduced downlink time. The findings of this research are used to assess the feasibility of such a system for future space missions. The combination of these approaches enables the system to achieve up to a 98.67% reduction in data size and downlink time, as well as the capacity to capture imagery over a 75.19x longer time period for a given storage size, while maintaining reconstruction quality and data integrity.
In space applications, the demand for high-performance computing systems capable of withstanding extreme conditions is paramount. Those systems built around processors and/or FPGAs require highly reliable and high-performance components. Boot/configuration memory and processing memory are essential to mission success.
Robotic test facilities for dexterous mobile robotic manipulation will be crucial for proving the viability of advanced terrestrial technologies for space applications. A need has been expressed by many commercial, academic, and international partners for reference tasks and mockups of items and interfaces relevant to Moon to Mars exploration use cases. IMETRO is a new robotics test facility at NASA Johnson Space Center in Houston, which will help to meet this need for both physical and digital twin robotics testbeds for Artemis Campaign Use Cases. IMETRO features:
• COTS mobile robots, provided by the facility to allow partners to test software, sensors, tools, end effectors, etc.
• Medium to high fidelity mockups and interface testbeds relevant to multiple program stakeholders, to reduce duplication and encourage standardization among partners
• Remote operations, to simulate supervised autonomy of robots in space from Earth
• Digital-twin open-source robotic simulations for early partner s/w development and testing, providing digital models of the physical testbeds and COTS robots in the facility
TUESDAY, JULY 16, 2024
Introduction by the Workshop Organizers
Writing flight software is already difficult, so why do we spend so much time fighting dependency issues, setting up custom-built scripts, and tracking down obscure compatibility issues with dependencies? A vast amount of engineering time and energy is spent on these banal tasks. Two new tools promise a way out of this morass: Rust and Nix. Nix is something of an enigma, combining a package manager, build system, programming language, and Linux distros into one powerful tool. Rust, famous for its memory safety, likely needs no introduction. Beyond memory safety, Rust has a robust package ecosystem that focuses on interoperability. Rust packages like Serde, postcard, and embedded-hal make it easy to develop production-quality software quickly. Using Nix, new team members can set up their development environments in seconds. Cross-compiling and building full OS images is trivial with Nix. Together, Nix and Rust allow for rapid iteration and a streamlined developer experience, free from the pains of other tooling. This talk will outline how these techniques have been used in two upcoming missions: one in LEO and one in deep space.
The High-Performance Spaceflight Computing (HPSC) processor is a game-changing space compute solution that addresses the computational performance, energy management, and fault tolerance needs of NASA missions through 2040 and beyond. This presentation aims to provide a succinct overview of the program to the general public, outlining its key deliverables and the significant impact it is poised to have on the future of space computing and autonomous space missions. Attendees are cordially invited to participate in the forthcoming HPSC workshop for an in-depth exploration of the program's details.
This presentation will also provide a brief overview of the architecture and capabilities of the HPSC processor. Additionally, it will highlight the expanding ecosystem associated with the device.
For many space applications, traditional control methods are often used during operation. However, as the number of space assets continues to grow, autonomous operation can enable rapid development of control methods for different space related tasks. One method of developing autonomous control is Reinforcement Learning (RL), which has become increasingly popular after demonstrating promising performance and success across many complex tasks. While it is common for RL agents to learn bounded continuous control values, this may not be realistic or practical for many space tasks that traditionally prefer an on/off approach for control. This paper analyzes using discrete action spaces, where the agent must choose from a predefined list of actions. The experiments explore how the number of choices provided to the agents affects their measured performance during and after training. This analysis is conducted for an inspection task, where the agent must circumnavigate an object to inspect points on its surface, and a docking task, where the agent must move into proximity of another spacecraft and "dock" with a low relative speed. A common objective of both tasks, and most space tasks in general, is to minimize fuel usage, which motivates the agent to regularly choose an action that uses no fuel. Our results show that a limited number of discrete choices leads to optimal performance for the inspection task, while continuous control leads to optimal performance for the docking task.
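A small sketch of what a discrete action space means in this setting (an assumed thruster model, not the paper's environment): each action index maps to a fixed combination of per-axis thrust levels, with one index corresponding to the fuel-free, all-off choice:

```python
import numpy as np

# Illustrative discrete action table for 3-axis thrust. With an odd number
# of choices per axis, the zero-thrust (no-fuel) action is always included.
def make_action_table(n_choices: int, f_max: float = 1.0) -> np.ndarray:
    levels = np.linspace(-f_max, f_max, n_choices)       # includes 0 when odd
    grid = np.meshgrid(levels, levels, levels, indexing="ij")
    return np.stack([g.ravel() for g in grid], axis=1)   # (n_choices**3, 3)

actions = make_action_table(n_choices=3)  # 27 discrete 3-axis thrust options
print(actions[13])                        # [0. 0. 0.] -- the fuel-free action
```

Varying `n_choices` is exactly the knob the experiments sweep: more levels approach continuous control, fewer levels approach bang-off-bang control.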
Modern spacecraft are increasingly relying on machine learning (ML). However, physical equipment in space is subject to various natural hazards, such as radiation, which may inhibit the correct operation of computing devices. Despite plenty of evidence showing the damage that naturally-induced faults can cause to ML-related hardware, we observe that the effects of radiation on ML models for space applications are not well-studied. This is a problem: without understanding how ML models are affected by these natural phenomena, it is uncertain “where to start from” to develop radiation-tolerant ML software.
As ML researchers, we attempt to tackle this dilemma. By partnering with space-industry practitioners specialized in ML, we perform a reflective analysis of the state of the art. We provide factual evidence that prior work did not thoroughly examine the impact of natural hazards on ML models meant for spacecraft. Then, through a “negative result,” we show that some existing open-source technologies can hardly be used by researchers to study the effects of radiation for some applications of ML in satellites. As a constructive step forward, we perform simple experiments showcasing how to leverage current frameworks to assess the robustness of practical ML models for cloud detection against radiation-induced faults. Our evaluation reveals that not all faults are as devastating as claimed by some prior work. By publicly releasing our resources, we provide a foothold—usable by researchers without access to spacecraft—for spearheading development of space-tolerant ML models.
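As a concrete example of the kind of experiment described (a minimal sketch, not the authors' released resources), a single radiation-induced upset can be emulated by flipping one bit in a model weight's float32 representation:

```python
import numpy as np

# Illustrative single-event-upset injection: flip one bit of a float32
# parameter, the basic primitive behind radiation fault-injection studies.
def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    w = weights.astype(np.float32).copy()
    as_int = w.view(np.uint32)            # reinterpret the bits in place
    as_int[index] ^= np.uint32(1 << bit)  # flip one of the 32 bits
    return w

w = np.array([0.5, -1.25, 3.0], dtype=np.float32)
w_faulty = flip_bit(w, index=1, bit=30)   # upset in the exponent field
print(w, w_faulty)  # exponent flips change magnitude drastically; low
                    # mantissa flips are often benign, matching the finding
                    # that not all faults are devastating
```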
This work explores how to leverage the Rust programming language for space applications and remote system applications in general. It introduces a novel framework named sat-rs with the goal of simplifying the work of engineers writing on-board software for remote systems using Rust. A holistic approach is taken, covering the exploration of the existing ecosystem, the integration with ground systems, and the utilization of Rust’s distinctive language features to minimize the effort needed to create on-board software for remote systems.
We examine ways to enhance cybersecurity in spacecraft operations by analyzing and reducing the attack surface of flight software. We advocate for reducing complexity in the software architecture and adopting more secure architectural principles to mitigate vulnerabilities and make spacecraft more resilient against cyber attacks. Utilizing a systematic approach, we focus on the real-time operating system (RTOS) and operating system abstraction layer (OSAL) as key areas of scrutiny and development of mitigations. This study's findings suggest strategies for simplifying abstractions to make them more secure, addressing implementation issues, and providing supporting evidence for moving to a more resilient architectural approach.
Understanding the Interface Control Drawing (ICD) boundaries is essential for designing digital data links. Defining the framework of the digital bit stream and clearly implementing the digital signal constructs are two distinct functions that must be accomplished. This work compares LVDS with CML signal levels in the context of interfacing SERDES outputs with inputs to photonic engines, which are capable of delivering low-jitter performance to link elements of differing distances and different data rates. Consideration is given to the physical realization of 4-port and 16-port Ethernet links supporting real-time-sensitive data processing requirements.
Communications in space have been implemented as point to point, where a network has not been used or deployed. Compared to the Internet, communications in deep space have very long delays, up to 40 minutes round trip to Mars for example, and intermittency of minutes to hours to days. Twenty years ago, the Internet Protocol (IP) suite was identified as not suitable for space networking [RFC4838], so a completely new protocol stack based on the Bundle Protocol (BP) [RFC9171] was designed. BP requires completely new and tailored routing, naming, security, APIs, and applications, and an entirely new way to write applications. Since then, the IP protocol suite has evolved in various dimensions, such as for IoT, mobile, and intermittent communications, with new protocols such as QUIC and CoAP. An initiative to reassess the use of the IP protocol suite in deep space is underway, in which the whole stack, from IP to routing, security, naming, transport, network management, and applications, is profiled for deep space use. Reusing current IP-based protocols enables, for example, the use of HTTP REST APIs over deep space links, network management protocols such as NETCONF and YANG, naming using DNS, etc. Therefore all the code and frameworks available today can be reused. However, these protocols and code usually have assumptions about the network characteristics of the current well-connected and fast Internet, which are invalid in deep space. This DeepSpaceIP initiative identifies these assumptions and defines profiles for the protocols and applications to be usable in deep space. A testbed with simulated deep space communication characteristics is used to verify the applicability of these profiles. This presentation describes the rationale, proposal, architecture, profiles, most recent results, and guidance to space application developers on how to use IP in deep space.
Innoflight's Mission Processing Electronics (MPE) and Mission Networking Electronics (MNE) modular 3U VPX architectures offer unprecedented high-performance on-board processing (GPP, GPU, and FPGA), storage (up to 2 TB), networking (Ethernet switch and IP/MPLS router) and Input/Output (I/O) capabilities. These 2-slot (MPE-400 series) and 4-slot (MPE-600 series) VPX chassis modular solutions are ideal for payload/edge/AI processing, including Battle Management Command, Control & Communications (BMC3), mission data processing for advanced space sensors (IR, SAR/RF and hyperspectral), and networking applications, to name a few. Innoflight is already producing these products in large volumes, driven by the needs of the Space Development Agency (SDA) Proliferated Warfighter Space Architecture (PWSA) pLEO tranches and other missions.
SpaceFibre is a data link and network technology developed specifically for spacecraft on-board data-handling. It runs over electrical or fibre-optic cables, operates at very high data rates, and provides in-built quality of service, and fault detection, isolation and recovery capabilities. Because of these important characteristics, SpaceFibre is already flying in several spacecraft and being designed into over 60 more.
The key features of SpaceFibre are listed below:
• Very high performance, e.g. 25 Gbit/s with a quad-lane link, with each lane at 6.25 Gbit/s.
• Operates over electrical and fibre-optic media.
• High reliability and high availability using error-handling technology which is able to recover automatically from transient errors in a few microseconds without loss of information.
• Multi-lane capability providing increased bandwidth, rapid (few μs) graceful degradation in the event of a lane failure, hot and cold lane redundancy, and support for asymmetric traffic.
• Quality of service using multiple virtual channels across a data link, each of which is provided with a priority level, a bandwidth allocation and a schedule.
• Virtual networks that provide multiple independent traffic flows on a single physical network, which, when mapped to a virtual channel, acquire the quality of service of that virtual channel.
• Deterministic data delivery of information using the scheduled quality of service, in conjunction with priority and bandwidth allocation.
• Low-latency broadcast messages which provide time-distribution, synchronisation, event signalling, error reporting and network control capabilities.
• Small footprint which enables a complete SpaceFibre interface to be implemented in a radiation tolerant FPGA; for example, around 3% of an RTG4 FPGA for a typical instrument interface with two virtual channels.
• Backwards compatibility with SpaceWire at the network level, which allows simple interconnection of existing SpaceWire equipment to a SpaceFibre link or network.
• SpaceFibre is a data and control plane technology in the revised VITA 78 standard (SpaceVPX-2022) and a data-plane technology in the ADHA standard.
For instruments which have a modest data rate e.g. 200 Mbit/s, SpaceWire may seem to be the obvious choice for collecting the data from them, but the capabilities of SpaceFibre make it very attractive for interfacing to moderate (100 Mbit/s) data-rate instruments as well as those with high (1 Gbit/s), very high (10 Gbit/s) and extremely high data-rates (>>10 GBit/s).
This paper introduces SpaceFibre and then describes the WBS-VIII, a high-performance FFT-based spectrometer instrument processor designed for spaceflight applications which has modest output data-rates. It then explains why SpaceFibre was used as its data and control interface. Some of the facilities inherent in SpaceFibre, beyond the raw performance, are used to significant advantage. Particular attention is given to the SpaceFibre broadcast message capability and how it was able to simplify the software in the instrument control unit triggering and controlling the WBS-VIII, while also reducing the cable harness mass.
SpaceFibre is an open standard (ECSS-E-ST-50-11C, 2019) for high-performance, high-availability payload data-handling network technology for space applications. It is currently flying on at least six spacecraft and being designed into around sixty more. SpaceFibre operates over electrical or fibre optic media and is backwards compatible with SpaceWire (ECSS-E-ST-50-12C) at the packet level. SpaceFibre provides high data-rates, building on the capabilities of the Multi-Gigabit Transceivers (MGT) available in current FPGAs and ASICs. When the data-rate of a single lane is insufficient, several lanes can be used to form a multi-lane link. For example, a quad-lane link with a lane raw data-rate of 7.5 Gbit/s will provide a link raw data-rate of 30 Gbit/s. SpaceFibre provides high availability by recovering from transient errors rapidly (~3 µs), without loss of data and close to where the fault occurred, avoiding fault propagation. In a multi-lane link, should one lane fail, the link automatically reconfigures (taking ~2 µs once the fault has been detected) and continues to operate with the remaining lanes. Once again, this is done without loss of data. Hot or cold redundant lanes can be added to replace a faulty lane. SpaceFibre provides quality of service through several virtual channels, each with a priority, reserved bandwidth, and schedule. If a lane in a multi-lane link fails, the quality of service settings determine which virtual channels are able to send data and which are held up due to the reduced bandwidth. These dynamic and fast error and fault recovery capabilities provide the high availability of a SpaceFibre link. SpaceFibre has a small footprint and is straightforward to manage. SpaceFibre was developed by STAR-Dundee and the University of Dundee, with inputs from international engineers, and funded by STAR-Dundee, the European Union, ESA and UKSA.
TITLE: Standards Based Next Generation Avionics Centered on HPSC Single Board Computer
AUTHOR: Moog Broad Reach (PoC Gates West, email: gwest@moog.com)
INTRODUCTION: Space avionics have leveraged commercial standards for decades. From VME in the ’90s and CompactPCI in the 2000s, the next evolution of standardized Hi-Rel electronics is VPX. A standards committee is currently working on a variant of VPX specifically architected for space avionics applications. At the core of the avionics is the flight computer, typically a single board computer. An HPSC chip-based SBC, coupled with the emerging VPX standard for space, will enable new capabilities such as on-orbit autonomous decision making and AI/ML applications.
The intent is that this poster would touch on the following Topics of Interest:
HPSC SBC: The HPSC processor will offer state-of-the-art capability in terms of processing power, I/O connectivity, secure boot and operation, and radiation tolerance. The SBC built around the HPSC chip will conform to the emerging space VPX standard and supporting architecture.
VPX SPACE STANDARD: Main features and benefits of the standard will be highlighted, including built-in redundant capabilities and chassis management concepts.
AVIONICS CHASSIS: A notional avionics chassis, consisting of several standards-based Plug-In Cards (PICs) and power supplies, will be considered. Additionally, single string and redundant architectures will be explored.
APPLICATIONS: Finally, high level applications such as spacecraft bus avionics and payload processing units will be discussed. The concept of “software defined avionics” will also be touched on. The combination of an HPSC SBC and the emerging space VPX standard will enable new and exciting capabilities for future projects.
Interoperability and scalability of robotic manipulators will be key to develop and sustain a lunar surface and cislunar ecosystem. From in-space servicing, assembly, and manufacturing (ISAM) to logistics, maintenance, and science operations, robotic manipulation is a critical NASA capability need and the demand for high-performance spaceflight computing will only rise as robotic tasks become more autonomous. With increased complexity, testbeds for research, feasibility studies, and technology demonstrations will be essential. The Dexterous Robotics Team at NASA Johnson Space Center has established multiple robotic manipulation testbeds taking a supervised autonomous remote operations approach and plans to infuse HPSC to emulate the flight environment and close the gap between space technology development and flight operations.
In the domain of space communications, particularly in regions beyond cislunar space, the development of advanced networking solutions is essential to address the challenges posed by limited connectivity, substantial propagation delays, and radio signal variations. This study explores a data-driven intelligence approach to the Licklider Transmission Protocol (LTP), specifically focusing on dynamically adjusting the maximum payload size of segments. Prior research has emphasized the potential benefits of dynamically adjusting this parameter, introducing the concept of Cognitive LTP. This paper presents the software implementation of Cognitive LTP (CLTP) within an open-source Delay Tolerant Networking (DTN) framework, specifically the High-rate Delay Tolerant Networking (HDTN), and evaluates its performance under realistic space conditions. Leveraging the Cognitive Ground Testbed (CGT), developed by NASA GRC for spacecraft communication emulation, this study effectively bridges the gap between theoretical advancements and practical applications. By thoroughly analyzing CLTP's functionality within the CGT, this research offers insights into the practical implications of adaptive networking strategies, emphasizing the importance of conducting tests in relevant environments for the maturation of space communication technologies.
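To illustrate the parameter CLTP adapts (a hypothetical policy sketch, not NASA's HDTN implementation), consider a controller that shrinks the maximum segment payload size when observed loss is high and grows it when the link is clean. Larger segments amortize header overhead on clean links; smaller segments lose less data per corruption:

```python
# Hypothetical adaptation policy for LTP's maximum segment payload size.
def next_segment_size(current: int, loss_rate: float,
                      lo: int = 256, hi: int = 64000) -> int:
    if loss_rate > 0.05:      # lossy link: shrink multiplicatively
        current = int(current * 0.5)
    elif loss_rate < 0.01:    # clean link: grow additively
        current = current + 1024
    return max(lo, min(hi, current))

size = 16000
for loss in [0.0, 0.0, 0.08, 0.02]:
    size = next_segment_size(size, loss)
    print(size)               # 17024, 18048, 9024, 9024
```

A learned, data-driven policy as in CLTP would replace these fixed thresholds with a model trained on observed link conditions.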
The Interplanetary Network (IPN) emerges as the backbone for communication between various spacecraft and satellites orbiting distant celestial bodies. This paper introduces the Interplanetary Network Visualizer (IPN-V), a software platform that integrates interplanetary communications planning support, education, and outreach. IPN-V bridges the gap between the complexities of astrodynamics and network engineering by enabling the generation and assessment of dynamic, realistic network topologies that encapsulate the inherent challenges of space communication, such as time-evolving latencies and planetary occlusions. Leveraging the power of Unity 3D and C#, IPN-V provides a user-friendly 3D interface for the interactive visualization of interplanetary networks, incorporating contact tracing models to represent line-of-sight communication constraints accurately. IPN-V supports importing and exporting contact plans compatible with established space communication standards, including NASA’s ION and HDTN formats. This paper delineates the conception, architecture, and operational framework of IPN-V while evaluating its performance metrics.
This paper explores the integration of Delay-Tolerant Networking (DTN) and Contact Graph Routing (CGR) within Direct-to-Satellite Internet of Things (DtS-IoT) networks, utilizing the FLoRaSat discrete-event simulator based on Omnet++. By incorporating a DTN model and the CGR algorithm, the study evaluates the efficacy of these technologies in optimizing data routing and handling across emerging Low-Earth Orbit (LEO) satellite networks. The research delves into various satellite fleet configurations, including Star and Delta constellations, across different numbers of orbital planes and with the integration of opportunistic Inter-Satellite Links (ISLs). Results demonstrate that the DTN store-carry-and-forward approach, enhanced by CGR, significantly reduces end-to-end delivery delays. Specifically, the implementation achieves an average end-to-end delivery delay as low as 10 minutes in 4-plane Star constellations with 24 satellites and immediate forwarding in 8-plane Delta constellations of equivalent size, underscoring the potential of DTN and CGR to improve the efficiency and reliability of emerging DtS-IoT.
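The core idea of CGR can be sketched as a shortest-path search over a contact plan (a simplified illustration, not FLoRaSat's implementation): each hop must use a contact that is still open when the bundle arrives, and the bundle may be stored on a node until the next contact begins:

```python
import heapq

# Contact: (sender, receiver, start, end, one_way_light_time), all hypothetical.
contacts = [
    ("sat1", "sat2", 0, 600, 0.01),
    ("sat2", "gs", 900, 1200, 0.02),   # bundle stored on sat2 until t=900
    ("sat1", "gs", 2000, 2400, 0.02),
]

def earliest_arrival(plan, src, dst, t0=0.0):
    """Dijkstra over the contact plan: earliest best-case delivery time."""
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        for s, r, start, end, owlt in plan:
            if s != node or t > end:
                continue                       # contact unusable or closed
            arrival = max(t, start) + owlt     # store-carry until it opens
            if arrival < best.get(r, float("inf")):
                best[r] = arrival
                heapq.heappush(pq, (arrival, r))
    return None

print(earliest_arrival(contacts, "sat1", "gs"))  # 900.02, via sat2
```

The store-carry-and-forward advantage is visible in the example: relaying through sat2 delivers at t=900.02, far earlier than waiting for the direct contact at t=2000.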
F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
Demand for orbital image data is increasing at a pace much faster than down-link capacity to move this data to ground stations for processing. Space Edge Computing for Deep Learning (DL) based analysis of orbital image data (categorization/change detection) offers a promising solution. Dynamic deployment of DL models to space edge computing devices is desirable but significantly constrained by uplink bottlenecks, hardware limitations and power budgets. This paper proposes a selection methodology for dynamic deployment of DL models in an on-orbit context. Making use of the Once-for-all (OFA) framework, our proposed solution considers required Machine Learning (ML) accuracy performance, upload availability and hardware limitations for time-critical, earth observation scenarios. Ground-station-aware orbital simulations are performed to determine the maximum transmission size for a given time window, which in turn determines the maximum network size. This, combined with space edge computing hardware limitations, is used as input to select a suitable OFA sub-network. In many scenarios tested, this methodology resulted in model transmission in one fewer orbital period for a small decrease in top-1 accuracy.
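The selection logic reduces to a simple budget computation, sketched below with hypothetical link parameters and a hypothetical sub-network catalog:

```python
# Illustrative sizing of the largest deployable model from contact time and
# uplink rate, then picking the most accurate sub-network that fits.
uplink_bps = 2_000_000                        # 2 Mbit/s uplink (assumed)
contact_s = 480                               # visibility window per pass, s
budget_bytes = uplink_bps * contact_s // 8    # 120 MB uploadable per pass

# (name, size in bytes, top-1 accuracy) -- hypothetical OFA sub-networks
subnets = [
    ("ofa-large", 180e6, 0.801),
    ("ofa-medium", 95e6, 0.786),
    ("ofa-small", 40e6, 0.752),
]
deployable = max((s for s in subnets if s[1] <= budget_bytes),
                 key=lambda s: s[2])
print(deployable)  # ('ofa-medium', 95000000.0, 0.786): fits in one pass,
                   # trading a small top-1 drop for one fewer orbital period
```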
Teledyne e2v has a strong portfolio of Space Data Processing Solutions, extensively qualified and characterized against radiation, to address Edge Computing systems. In this presentation, Teledyne e2v will present the key features of its Space ARM® based Multi-core Processors, Processing modules and High speed DDR4 memories that are qualified for Space.
In the realm of space weather forecasting, the emergence of our proposed Helios-LSTM algorithm signifies a groundbreaking leap towards precision in predicting solar wind activity. With a paramount focus on the urgent requirement for accurate forecasts, this paper introduces a cutting-edge deep learning model that not only monitors solar wind patterns but achieves an unprecedented 94% accuracy rate. Our proposed research stems from a meticulous integration of data from NASA’s Solar Wind, Solar Radiation (CME), and Geomagnetic Storm APIs, culminating in a robust dataset designed for training our proposed model. Our methodology encompasses sophisticated data preprocessing techniques, leveraging hourly features from solar wind data and employing imputation strategies for missing values. The core of the model architecture includes a Bidirectional LSTM layer to capture nuanced temporal dependencies, three dense layers for comprehensive feature transformation, and a GRU layer to further enhance the analysis of solar wind activity. Trained on 29 features, our Helios-LSTM algorithm not only outperforms existing methods but also demonstrates its prowess in predicting solar wind patterns over varying time intervals from the last two hours to the last seven days. The significance of our research extends beyond the realm of solar wind forecasting, as solar wind interactions with Earth’s magnetic field can trigger geomagnetic storms, presenting imminent risks to critical infrastructure. By forecasting the Disturbance Storm-Time Index (Dst), our model utilizes data from NASA’s ACE and NOAA’s DSCOVR satellites to unravel the complex relationships between interplanetary magnetic fields, solar wind plasma, and sunspot activity. Evaluation metrics such as root mean square error and coefficient of determination substantiate our novel Helios-LSTM model’s efficacy in predicting geomagnetic storms. The outcomes not only offer invaluable insights for satellite operators, power grid managers, and navigation systems but also lay the foundation for a predictive model that safeguards Earth against the disruptive impacts of geomagnetic storms. Our research heralds a new era in space weather forecasting, providing decision-makers with a robust and timely tool to fortify essential systems and brace for geomagnetic disturbances.
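Based on the layer types named above (layer sizes and sequence length are assumptions, since the abstract does not specify them), the described architecture could be sketched in Keras as:

```python
import tensorflow as tf

# Illustrative sketch of the described stack: Bidirectional LSTM, three
# dense layers, and a GRU layer, over 29 features per hourly time step.
# All widths and the 24-step window are assumed, not taken from the paper.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 29)),       # 24 hourly steps x 29 features
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.GRU(32),              # collapses the time axis
    tf.keras.layers.Dense(1),             # e.g., predicted Dst index
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
```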
Chiplets have the potential to increase the reliability of electronic architectures and to enable scalability and quick reuse. But they also come with many design challenges involving topology, placement, power consumption, thermal impact, and latency. We will discuss the application of chiplets, their potential to accelerate space deployment, and how Mirabilis Design is empowering systems engineers and architects to deploy faster.
Project AV (MAVIS) is an interdisciplinary university project undertaken by the SEDS-UPRM chapter, focused on designing and manufacturing a semi-autonomous Mars rover to compete in the University Rover Challenge (URC). As the first team from the University of Puerto Rico to develop a semi-autonomous rover and robotic arm, MAVIS represents a milestone in collaborative innovation across diverse disciplines. The MAVIS rover, inspired by the Sherpa Rover, consists of six major sub-assemblies: wheels, steering, suspension, chassis, robotic arm, and science suite. Constructed mostly using additive manufacturing techniques, MAVIS measures 0.80m in height and 1.05m in length and width, weighing 50.43kg with payloads. The chassis, designed with an octahedral shape, integrates the suspension at corners and features optimized floors for efficient electrical component arrangement. Additionally, it includes a front payload bay for interchangeable installation of the robotic arm and science suite. The chassis, primarily made of aluminum sheets, weighs 6.45kg and offers strategic dimensions for rover stability and operational versatility. MAVIS's suspension system, composed of two aluminum control arms and a stainless-steel spring, ensures stability and maneuverability on diverse terrains, supporting a ground clearance of 0.275m at a 45-degree operational position. The steering system seamlessly integrates into the end of the suspension, enabling both active and passive modes for versatile navigation. For the wheels, the team developed an airless tire, crafted with thermoplastic polyurethane (TPU) and Nylon 6 components, featuring a unique "M" tread design for enhanced traction and obstacle traversal capabilities. An anti-deformation barrier prevents tire damage, crucial for mission success in challenging terrain. One of the two payloads, the robotic arm, boasts five degrees of freedom for high dexterity tasks in Extreme Delivery and Equipment Servicing Missions. The arm is made predominantly from Nylon 6 and Nylon 11 to ensure a design that is both strong and lightweight. The arm's end-effector, driven by linear actuation, ensures precise object manipulation and task execution. ROS and RViz are used to provide the control station with real-time visualization and accurate positioning of the arm during operation, and a GUI was developed to precisely control the arm’s movements. The science suite, as the second payload, houses spectrometry mechanisms and sample collection systems for in-situ analysis and life-detection tasks. MAVIS's onboard stereo camera aids in geological feature analysis, guiding soil sample collection and subsequent ATP bioluminescence and fluorescence spectrometry analyses for microbial activity and environmental assessments. Powering MAVIS is a 22.2V 22000mAh LiPo battery, managed by a comprehensive power distribution board (PDB) for optimal energy distribution. The PDB interfaces with essential components, including motor systems for steering and arm movements, with planned implementation of a robust battery monitoring system for enhanced operational insights. The software integration via ROS enables autonomous navigation, sensor fusion, and seamless communication, supported by Ubiquity devices for local area network connectivity and remote operations. MAVIS's Graphical User Interface (GUI) streamlines monitoring, command execution, and automation scripts, enhancing operational efficiency and mission success probabilities.
Motiv Space Systems intends to utilize the High-Performance Space Computing (HPSC) processor's generational leap in space-qualified computational capability to drive the next generation of smart payloads, with a focus on robotic manipulation systems. Motiv's space rated modular manipulation platform, the xLink, provides a powerful and flexible hardware platform for the HPSC to control across a wide range of operating conditions: from on-orbit servicing, assembly, and manufacturing to lunar infrastructure construction. The xLink’s 7-DOF configuration has high accuracy joint torque sensing, class-leading millimeter repeatability, and a 6-DOF force-torque sensor, enabling precise dexterous operations as well as the potential for force and compliance control needed for contact dynamics operations in many on-orbit manipulation tasks. The HPSC will act as a high level controller for the plethora of sensors and actuators in the xLink system, and will be used to execute cutting edge algorithms encompassing sensor fusion, vision-based control, and other areas that enable in-space operations.
As the space sector expands with new types of satellites, orbital systems and services, the user segment faces escalating security threats. This segment delivers crucial services for enabling interactions between users and space systems, highlighting the need for strong security mechanisms as attack surfaces widen and become more sophisticated. In particular, the adoption of artificial intelligence (AI) in the space domain brings new attack vectors that traditional methods cannot address. To systematically analyze this emerging threat landscape, this paper develops a reference architecture to model the user segment’s components, communications and processes. We specifically assess the impact of AI on the attack surface by constructing attack trees for Earth Observation scenarios with and without AI integration, using dedicated space and AI threat modeling frameworks (i.e., SPARTA and ATLAS). By comparing threats and impacts between these attack trees, we determine the unique security challenges introduced by exploiting AI. These insights yield priorities for security strategies to defend against evolving AI-driven threats, as well as specify the caveats of AI integration in the space user segment.
The deployment of machine learning (ML) algorithms is an increasing requirement in many spacecraft, despite the heavy computational demands. In this talk, the challenges around this deployment will be discussed, with comparison to ground-based options. The technology required to implement these solutions in the space environment will be discussed, and the off-the-shelf reference designs and development platforms that Alpha Data provides to enable customers to achieve such solutions in FPGA and Adaptive SoC devices will be presented, including an update on the latest AMD Versal based cards.
The field of satellite imagery suffers from the scarce availability of open datasets that can be used to develop novel algorithms. One of the most recent open datasets, RarePlanes, provides real satellite images with excellent resolution and hand-made annotations of aircraft parked along runways. In the context of training deep convolutional neural networks (CNNs), the RarePlanes dataset has a class imbalance: some aircraft classes are sufficiently represented, while others suffer from a short supply of annotated instances. With this pitfall in mind, the RarePlanes dataset includes synthetic data that can be used to compensate for problems raised during CNN training. This report assesses the use of synthetic satellite imagery to improve CNN training on real satellite images using the transfer learning (TL) technique. TL with synthetic satellite imagery is compared against TL with the Common Objects in COntext (COCO) dataset and against the no-TL case with randomly initialized weights. Results indicate that TL with synthetic satellite imagery provides better results when applied to real satellite imagery, supporting the use of synthetic data alongside real data in CNN applications.
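To make the transfer-learning setup concrete, here is a minimal sketch of the pretrain-then-fine-tune pattern in PyTorch. The backbone choice, class count, and frozen-layer policy are illustrative assumptions, not the report's actual configuration.

```python
# Hypothetical transfer-learning sketch, not the report's code: initialize a
# CNN from source-domain weights, then fine-tune a new head on real imagery.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # placeholder for the number of aircraft classes

# A pretrained backbone stands in for weights learned on synthetic imagery or
# COCO; torchvision's bundled weights are used here purely for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the backbone so only the new classification head adapts at first.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of real satellite image crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The no-TL baseline in the comparison would correspond to building the same model with randomly initialized weights (weights=None) and training all layers from scratch.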
The High-rate Delay Tolerant Networking (HDTN) project at the NASA John H. Glenn Research Center (GRC) is developing a performance-optimized Delay Tolerant Networking (DTN) implementation able to provide reliable, multi-gigabit-per-second automated network communications for near-Earth and deep space missions. To that end, this paper provides an overview of the testing and integration efforts leading toward future infusion of HDTN with the International Space Station (ISS). Over the past year, the HDTN team has performed a series of end-to-end tests between the Software Development and Integration Laboratory (SDIL) at the Lyndon B. Johnson Space Center (JSC) and Marshall Space Flight Center's Huntsville Operations Support Center (HOSC). The testing has focused on a realistic emulation of the ISS Ku-band RF link, which operates at a maximum of 500 Mbps downlink with a 600 ms round-trip time. In this environment, the HDTN onboard gateway has been tested for interoperability with ISS payload nodes and the DTN ground gateway, store-and-forward capability, reliable transport using the Licklider Transmission Protocol (LTP), and successful recovery from unexpected loss of signal. In addition to integration testing, the HDTN team has developed a series of software engineering practices to ensure the stability and maturity of the implementation. As a result, HDTN is preparing to service a variety of flight missions, the first of which is in support of ISS high-rate communications.
As the complexity of space-faring hardware and software increases, so does the significance of extensive automated testing, traceability, and a dependable, collaborative development environment. This presentation will discuss the application of the Renode simulation framework, System Designer, Remote Device Fleet Manager, and other open source tooling developed by Antmicro in the context of our customers' space use cases, and the ways in which an open source, software-driven development approach leads to better, more reliable devices with faster turnaround. We will present various aspects of the design and verification process of a modular, heterogeneous multi-node OBC system involving multiple architectures (such as Arm, RISC-V and LEON), Linux and RTOS nodes, and soft FPGA IP. Special focus will be given to complex system testing in simulation using Renode and how it enables SW/HW co-verification and integration testing. This scalable methodology, based on proven open source solutions, translates to faster time-to-market and is successfully being applied in several current missions.
Introduction: The GR716B is a radiation-hardened mixed-signal microcontroller designed specifically for spacecraft avionics. It sets itself apart from other microcontroller solutions through the performance it provides and the number of interfaces it supports. The GR716B is suitable for implementing distributed control, bridging between communication buses, DC/DC control applications, FPGA and COTS supervision, and replacing FPGAs in terminal units. The device entered manufacturing in December 2023. The presentation will describe the overall functionality and application examples. Architecture: Based on a LEON3FT processor and two real-time accelerators (RTAs), the GR716B integrates 192 KiB of on-chip RAM (with EDAC) and fault-tolerant memory controllers. The LEON3FT features single-cycle instruction execution and data fetches from the on-chip RAM. Execution determinism is guaranteed by deterministic instruction execution times and fixed interrupt latency. The system operating frequency can be set up to 100 MHz. The microcontroller includes an embedded ROM with boot loader, a dedicated SPI memory interface with 4-byte addressing support, and an 8-bit SRAM/PROM fault-tolerant memory controller capable of accessing up to 16 MiB of ROM and 32 MiB of SRAM. I/O interfaces include a 2-port SpaceWire router, Ethernet, MIL-STD-1553B, CAN FD, PacketWire, PWM, SPI, UART, I2C, and GPIO. The analog functions include radiation-hardened cores such as DACs and ADCs, analog comparators, a precision voltage reference, power-on reset, a brownout detector, a low-dropout regulator (LDO), LVDS transceivers, a PLL, and all active parts for a crystal oscillator (XO). All functionality is designed for a total ionizing dose of 300 krad(Si), and analog performance, including the precision voltage reference, is designed for 100 krad(Si). Software Ecosystem: The GR716B's Software Development Environment (SDE) includes bare-metal driver support, an instruction-level simulator, and a debugger. The Zephyr open-source RTOS is being ported to the GR716B, expanding its software compatibility.
Applications: The GR716B is equipped with dedicated hardware designed to support at least four independent digitally controlled DC/DC converters. It can also accommodate complex switching power converters, including various full-bridge topologies. Real-time performance for DC/DC applications is ensured through the close integration of the RTAs with hardware functions such as the integrated ADCs, DACs, and analog comparators. Additionally, the GR716B incorporates GRSCRUB, an FPGA configuration supervisor responsible for both programming and scrubbing the FPGA configuration memory, a feature that prevents the accumulation of errors over time. Compatible with the Kintex UltraScale and Virtex-5 AMD/Xilinx FPGA families, the core can be configured to scrub either the entire FPGA configuration memory or a specific subsection. GRSCRUB interfaces with the FPGA through the SelectMAP interface.
WEDNESDAY, JULY 17, 2024
In 2021, the European Space Agency launched the "Parastronaut – Fly! Feasibility Study" to determine ways that astronauts with disabilities would be able to work in space, and hired the world's first para-astronaut. This panel will discuss how technology can enable people with disabilities and different abilities to participate in space exploration.
Avionics technology advances are needed to enable NASA's future crewed exploration and science missions. The NASA Advanced Avionics Envisioned Future provides the context of avionics technology within NASA's Space Technology Mission Directorate (STMD) and its relevance to the needs of NASA's Science Mission Directorate (SMD) and Exploration Systems Development Mission Directorate (ESDMD). Specific technology gaps are presented, along with gap-closure approaches and priorities.
In space applications, the demand for high-performance computing systems capable of withstanding extreme conditions is paramount. Those systems built around processors and/or FPGAs require highly reliable, high-performance components. Boot/configuration memory and processing memory are essential to mission success.
SpaceVPX (ANSI/VITA 78-2022) was developed as a baseline document to provide standard guidance for a variety of space systems. This standard shares a large drawback with other options in the VITA Standards Organization (VSO) portfolio: flexibility is treated as the standard measure of goodness. The past year has seen a massive amount of detailed standards work within the Sensor Open System Architecture (SOSA) effort to minimize flexibility and maximize interchangeability and portability of the basic building blocks necessary to ensure the tenets of both SOSA and the SOSA Space Subcommittee (S3C) tasked with the creation of this content. This presentation will explain, in painful detail, the topics the S3C has completed. These topics cover hardware (PIC development), new RESET schemes, updates to Power Management, and a new Utility Switch.
F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
In modern satellites, telemetry data is collected from the multiple instruments and subsystems on board. Parameters such as voltage, current, temperature, pressure, flow rate, magnetic field strength, electrical field strength, and mechanical strain are measured. Small satellites may have dozens or hundreds of telemetry channels, while a large, Class-A government satellite may have as many as 4,000 channels of telemetry data. This data is collected at each instrument and passed to the Telemetry, Tracking and Control system on board the satellite for transmission to the satellite's mission control center on the ground. Analysts review the telemetry data, looking for out-of-character or out-of-specification data which could indicate a potential or developing fault condition, possibly requiring a change in the satellite's operating mode or even the preparation of another satellite to take over duties of a satellite which will imminently fail. These human analysis activities are time-consuming, repetitive, and error-prone, and hinder the goal of timely preparation for imminent fault conditions. Flying AI-based telemetry anomaly detection systems in space would enable real-time detection of possible or developing fault conditions, and would buy valuable time for satellite operators to prepare alternative resources, minimizing loss of mission data. In this presentation we show the development of an AI-based telemetry processing and analysis system which uses long short-term memory (LSTM) recurrent neural networks (RNNs), implemented on an AMD Versal Edge adaptive SoC, to monitor 64 channels of telemetry data. System performance and device utilization are shown. The anomaly detection system can be viably integrated into AMD's Versal Edge VE2302 adaptive SoC, which will be offered in a radiation-tolerant version qualified for space flight, allowing for autonomous detection of telemetry anomalies in real time on orbit.
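As a sketch of the prediction-based detection idea (an illustration under assumed parameters, not the presented Versal implementation), an LSTM can be trained to forecast the next telemetry sample, with large forecast errors flagged as anomalies:

```python
# Illustrative LSTM telemetry anomaly detector; channel count, window length,
# and threshold are assumptions, not the presenters' design parameters.
import torch
import torch.nn as nn

NUM_CHANNELS = 64     # telemetry channels monitored in parallel
WINDOW = 128          # timesteps of history fed to the model

class TelemetryLSTM(nn.Module):
    """Predicts the next telemetry sample from a window of past samples."""
    def __init__(self, channels=NUM_CHANNELS, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, channels)

    def forward(self, x):              # x: (batch, WINDOW, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # prediction for the next sample

model = TelemetryLSTM()

def is_anomalous(window, next_sample, threshold=3.0):
    """Flag channels whose absolute prediction error exceeds a threshold
    (the threshold would be tuned on nominal data in practice)."""
    with torch.no_grad():
        pred = model(window.unsqueeze(0)).squeeze(0)
    return (pred - next_sample).abs() > threshold
```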
An overview of current and next generation processing solutions from Frontgrade Technologies and Frontgrade Gaisler.
Autonomous cyber-physical systems (CPS) are on the rise for safety-critical applications. While formal verification approaches may work on simple systems, these approaches do not yet scale to complex ones. When systems are sufficiently complex, testing is often the only practical way to gain confidence that the system works as expected. How can we generate high-quality tests for CPS? This work proposes an approach to improve test coverage for autonomous cyber-physical systems. We achieve this with a new model-based seed generation algorithm in the fuzz testing pipeline. We first use Koopman operator approaches to construct a predictor for the effect of time-varying inputs on the cyber-physical system's behavior. Then, we use this predictor inside a model predictive control (MPC) optimization loop, generating control inputs that drive the system to desired states. We evaluate the strategy's effectiveness through extensive experiments on the well-known neural network air-to-air collision avoidance benchmark based on the ACAS Xu system. The proposed Koopman MPC approach achieves better test coverage than other fuzz testing and falsification tools.
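A minimal sketch of the two ingredients, assuming a linear (DMDc-style) surrogate in place of a full Koopman lifting and a simple sampling-based stand-in for the MPC optimization, might look like this:

```python
# Sketch of the seed-generation idea (an illustration, not the authors'
# implementation): fit a linear predictor from trajectory data, then search
# for input sequences that drive the surrogate toward a target state.
import numpy as np

def fit_predictor(X, U, X_next):
    """Least-squares fit of x_next ~ A x + B u from trajectory data.
    X, U, X_next are (samples, dim) arrays."""
    XU = np.hstack([X, U])
    W, *_ = np.linalg.lstsq(XU, X_next, rcond=None)
    n = X.shape[1]
    return W[:n].T, W[n:].T              # A: (n, n), B: (n, m)

def seed_inputs(A, B, x0, x_target, horizon=10, candidates=1000, rng=None):
    """Crude sampling-based MPC: roll random input sequences through the
    linear surrogate and keep the one ending closest to the target state."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m = B.shape[1]
    best_u, best_err = None, np.inf
    for _ in range(candidates):
        u_seq = rng.uniform(-1.0, 1.0, size=(horizon, m))
        x = x0
        for u in u_seq:
            x = A @ x + B @ u
        err = np.linalg.norm(x - x_target)
        if err < best_err:
            best_u, best_err = u_seq, err
    return best_u  # used as a fuzzing seed that steers the CPS to a rare state
```

The paper's actual pipeline uses Koopman lifting and a proper MPC optimization; the random-candidate search above merely conveys the "model as a steering oracle" structure.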
As the applications for autonomous systems in space missions grow, there is a great deal of interest in assuring ethical behaviour. If humans are to cooperate with autonomous systems, there must be assurance that they are trustworthy in making ethical decisions. In particular, this applies to situations in which there is uncertainty over the outcomes of decisions.
Our poster presents an explainable, expressive framework designed for this type of decision making. Based on Sven-Ove Hansson's Hypothetical Retrospection, we present a procedure which fairly evaluates actions comparatively through morally relevant information. We demonstrate the procedure in a Mars mission scenario, in which a rover chooses whether to investigate the status of a crew or continue acting as a communication relay. The procedure selects different explainable actions depending on a predetermined configuration of moral principles.
Human spaceflight is extremely dangerous. The failure of a critical spacecraft system in flight can readily result in the loss of the crew or mission. Other serious consequences of system failures include significant financial losses (the unit cost of one spacecraft can exceed $1B), widespread environmental damage, national embarrassment, and the cancellation of future spaceflight missions. As missions increasingly target deep space destinations that cannot be easily supported from Earth, the risk to crew is only increasing.
To mitigate these risks, program managers require that crewed spacecraft meet stringent failure tolerance and reliability requirements. These requirements flow to every critical subsystem on the spacecraft. Chief among them is the avionic system, which includes, among other things, all the flight computers, onboard data networks, sensors, and actuators. Spaceflight programs use formal reviews, such as the Preliminary Design Review (PDR) and Critical Design Review (CDR), to assess whether the avionic architecture meets the program’s failure tolerance and reliability requirements. These reviews must be completed successfully before fabrication and implementation of the avionics system can begin.
While participating in these reviews, the NASA Engineering and Safety Center (NESC) has observed a concerning trend in which the artifacts provided for review do not provide sufficient evidence the architecture satisfies NASA’s requirements. For example, designers may attempt to reuse the avionics approach from an uncrewed vehicle in a crewed vehicle without performing a hazard analysis demonstrating that adequate hazard controls are still in place. Similarly, designers may propose a method for containing faults without describing the fault containment regions within which faults may propagate – essential information for assessing the design. It is our goal to prevent this trend from continuing in future programs.
This presentation outlines the artifacts NASA needs to assess whether a failure-tolerant avionic architecture for crewed vehicles meets its failure tolerance and reliability requirements. We split these artifacts into three categories: Requirements, Design, and Analysis. Requirements artifacts ensure the designer’s intentions align with NASA’s goals. These artifacts include derived functional and safety requirements for the avionic system, as well as a hazard analysis documenting the impact that general functional failure modes (i.e., loss or malfunction) have on the system’s ability to meet those requirements. Design artifacts describe the architectural approach and its limitations. These artifacts include an overview of the redundancy scheme, redundancy management strategy, failure masking and recovery capabilities, and any assumed limitations on fault propagation or the failure modes of system components. Finally, analysis artifacts provide evidence of the system’s performance and correctness. These artifacts include component failure mode and effects analysis, independence analysis, and reliability analysis.
We emphasize that our goal is not to advocate for a particular architecture. A great variety of avionics architectures have been used successfully in crewed spacecraft. Rather, our focus is on describing what evidence is needed to justify the choice of architecture.
This presentation is intended for avionics, software, and safety personnel working on NASA crewed spaceflight projects. It also contains important considerations for program managers trying to determine what artifacts to require at program reviews, as well as the expected maturity of those artifacts.
Krste will talk about why RISC-V vector-based processors are the right solution for space applications requiring high performance, low power consumption, and long life.
Introduction: Modern space missions require increased processing reliability while providing greater security, higher autonomy, and more on-board processing capability. Accomplishing this requires high-performance computers that can operate in harsh space environments (vibration, thermal, and radiation). The presentation will discuss board development for the High-Performance Spaceflight Computing (HPSC) processor: the single-board computer (SBC) architecture design and the trades made during the development of an HPSC SBC for space applications.
Topics of Interest: Space Avionics Solutions, in-orbit and spacecraft networking, ML and AI
Electrical, Electronic, Electromechanical, and Electro-optical (EEEE) parts with military specifications (MIL-SPEC) and specific manufacturer radiation guarantees have been the foundation of space avionics for decades. Driven by cost, schedule, and/or performance, more space-system developers are utilizing commercial-off-the-shelf (COTS) parts in today's spacecraft. COTS parts can be used effectively, but a comprehensive COTS approach that ensures reliability, and thus mission success, has been lacking. This presentation describes NASA's activities to close this gap by leveraging Industry Leading Parts Manufacturers (ILPMs) and a well-defined Radiation Hardness Assurance (RHA) approach to ensure reliability.
This new method departs from the traditional approach of subjecting non-standard commercial parts to the screening and lot acceptance testing specified in military specifications. NASA recognizes that significant manufacturing improvements have taken place in the commercial industry, with the incorporation of statistical process control and a multitude of technological improvements in the fabrication process. Parts manufactured in large volumes with automated production and testing processes have demonstrated reliability equal to or higher than their MIL-SPEC counterparts, and they can likely be utilized with little to no additional reliability testing or screening where evidence of sufficient quality and reliability exists.
To facilitate this goal, along with parts engineering guidance, two new terms were defined in the NESC study: "Industry Leading Parts Manufacturer" and "Established COTS parts." An ILPM is a COTS manufacturer with high-volume automated production facilities that produces high-quality, reliable parts. Some parts produced by ILPMs, defined as Established COTS parts, do not need any additional MIL-SPEC or NASA screening and lot acceptance testing, based on their tightly controlled process and product qualification. The one caveat to this is radiation performance.
Most COTS parts are not characterized for the space radiation environment by their manufacturers. Since a part's radiation performance depends on its fabrication technology, subtle process changes that do not affect the part's parametric and reliability performance can still affect its radiation performance. NASA and ILPMs must therefore have a relationship that provides lot traceability for changes in wafer mask sets and doping levels, the critical information needed to assess appropriate part- and system-level radiation characterization and mitigation strategies.
NASA is currently developing an Agency-level Radiation Hardness Assurance (RHA) Standard that encompasses both commercial and MIL-SPEC part assurance requirements. Rather than establishing a rigid set of procedures to be followed, the standard establishes a taxonomy of RHA programs that can be applied to achieve varying degrees of RHA based on the demands of the mission, along with activities and deliverables to be completed at various stages of mission development. The emphasis on mission performance means that the approach focuses on more than part-level performance. Factors such as mission risk tolerance, environment (e.g., low radiation exposure), lifetime (e.g., short life), and application (low criticality, tolerant design, or other mitigations) are taken into consideration.
NASA recognizes that the amount of insight into COTS manufacturers, radiation characterization, and required verification evidence will differ by mission and will likely be driven by the mission's resources and associated risk posture. This new holistic approach that NASA is developing addresses the rapidly changing landscape of EEEE parts and how to leverage new technologies to meet NASA mission needs.
F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
In response to a growing demand for autonomous in-orbit operations, the pressing need for these operations to take place in an ever-widening set of environmental conditions, and the growing interest in small spacecraft, significant advancements have been made in the miniaturization and improvement of space-qualified sensing instrumentation. Yet, the effectiveness of vision-driven autonomous operations relies heavily on smaller cameras with superior low-light performance and dynamic range, for which an optimal solution is yet to be found. Single-Photon Avalanche Diode (SPAD) sensors emerge as a potential solution, offering high light sensitivity and dynamic range even with diminutive pixel sizes. However, their efficient integration with single-board computers requires bespoke interfaces and real-time processing power, typically enabled by Field Programmable Gate Arrays (FPGAs). This paper presents a dynamically reconfigurable onboard computer designed for seamless integration into the first space-qualified camera leveraging a 1-megapixel SPAD sensor. Future endeavors entail developing a demonstrator capable of acquiring exceptionally sharp and high dynamic range imagery under some of the most complex illumination conditions found in space. This paper outlines the technological advancements and a roadmap toward revolutionizing imaging capabilities in small spacecraft, paving the way for enhanced image acquisition and data processing in time-sensitive, autonomous space operations.
When deploying a system in space, safety and reliability are key factors in determining whether it is safe for real-world deployment and whether there are sufficient contingency plans for worst-case scenarios. This is no different for designs targeting FPGA-based platforms for space deployment. Today, FPGA-based designs are utilized in many safety-critical systems in the mil-aero, medical, industrial, robotics, and automotive industries as well as in space. These systems must meet stringent safety regulations such as those defined by the ISO 26262 standard for automotive and the IEC 61508 standard for industrial applications. To address this need, Synopsys provides a comprehensive family of integrated FPGA verification products, including VCS, Verdi, SpyGlass, Euclid, Z01X, and the HAPS FPGA prototyping and ZeBu emulation platform flows. For FPGA synthesis, Synplify provides state-of-the-art, easy-to-use automated features for high reliability in space-bound FPGA devices. The Synplify synthesis tool can automate design logic insertion at the module to register level and provides full features for functional testing and debugging, including logic duplication, triplication, error flag insertion, and fault injection. In this presentation, we will demonstrate how to apply high-reliability features using Synplify and how to use the fault-injection debug feature to verify proper operation of high-reliability features in FPGA-based designs.
Modern Field Programmable Gate Arrays (FPGAs) offer a solution to several issues related to real-time on-board systems, such as guaranteed execution time, and they are currently considered as target platforms for space applications. However, the complexity of producing circuits for these components hinders their widespread adoption. To address this issue, high-level synthesis tools provide another layer of abstraction above the logic circuit design process, for example by compiling C code into hardware description languages such as VHDL or Verilog. However, high-level synthesis results are poorly predictable and do not guarantee efficient use of recent FPGA capabilities provided by new primitives such as digital signal processors or random-access memories. In this paper we propose a compilation chain dedicated to reactive systems, i.e., controllers, providing a more predictable synthesis process for critical embedded control applications. The implemented solution demonstrates timing performance equivalent to the traditional synthesis process with a more predictable result.
The rapid growth of the commercial satellite market and the public/private collaboration, or New Space, as many refer to it, has accelerated the deployment of edge processing workloads, such as artificial intelligence (AI), machine learning (ML), image processing, and advanced networking, typically found in the commercial sectors. Please join VORAGO to learn how we are expanding our product roadmap to meet evolving market needs, complementing our industry-leading radiation-hardened MCUs.
IDEAS-TEK provides cutting-edge and cost-effective computing and embedded solutions for mission-critical applications particularly within the space industry. Central to IDEAS-TEK’s mission is the ability to develop and deploy electronics with the optimal blend of reliability, performance, cost-effectiveness, and time-to-launch, effectively addressing the wide array of challenges prevalent in today’s space sector. IDEAS-TEK’s strategy to achieve this objective is grounded in the utilization of modular standard-based systems that enable the fine-tuning of the performance/reliability trade-off by combining diverse hardware platforms within a framework that maximizes component reuse and minimizes engineering efforts.
IDEAS-TEK’s space computing ecosystem encompasses a variety of platforms, form-factors, and radiation tolerance levels that are intended to cater to a myriad of requirements ranging from critical supervisory functions, through mission control and on-board data processing, to real-time vision processing and artificial intelligence functions. These solutions, whether integrated as subcomponents such as VPX modules or complete systems like SpaceVNX+ small-form-factor systems, offer adjustable radiation tolerance levels to meet the specific demands of each mission.
IDEAS-TEK envisions the new High-Performance Spaceflight Computing (HPSC) chip-set as the cornerstone of future highly reliable heterogeneous computing systems in space. To realize this vision, IDEAS-TEK has initiated the development of a small-form-factor SpaceVNX+ HPSC module that will leverage the full computing capacity of the HPSC chip-set along with a significant portion of its computing and networking interfaces. This approach will enable integrators to use a single building block for configuring SWaP-C-constrained mission computers and payloads, while retaining scalability to larger systems that incorporate one or more HPSC chip-sets alongside application-specific processing units such as GPUs and neuromorphic hardware.
IDEAS-TEK's ongoing effort to develop its SpaceVNX+ HPSC module is currently in the requirements-generation stage. A feasibility study has been successfully completed, confirming that a SpaceVNX+ module based on the HPSC chip-set can be developed and fabricated. IDEAS-TEK is committed to closely following Microchip's HPSC development schedule, aiming to complete the module design in 2024 and enabling the fabrication and testing of the first prototypes in early 2025.
This research effort examines the challenges, benefits, and opportunities of creating a university CubeSat program. Guidance on initial steps, funding, faculty expertise, supporting curriculum, and designing and building CubeSat components will be provided. Using a developing program at a large university as a case study, collaboration with partners in higher education will also be discussed as an invaluable part of the process.
Commercial-off-the-shelf (COTS) Field Programmable Gate Array (FPGA) System-on-Chips (SoCs) are flexible platforms combining a processing system featuring high-performance embedded-class processors with a configurable FPGA subsystem for deploying hardware accelerators and ad-hoc custom peripherals. Flexibility combined with demonstrated high performance and energy efficiency makes COTS FPGA SoCs particularly attractive for space applications. One of the main challenges in using COTS FPGA SoCs in space is the susceptibility of the FPGA technology to Single Event Upsets (SEUs) caused by ionizing particles. This paper presents Open-CFR, an Open-source Co-design Framework for redundant execution of hardware accelerators on COTS FPGA SoC platforms. Open-CFR provides: (i) an automatically generated hardware shell supporting voting and detection, specifically tailored to the interface of a target hardware accelerator; (ii) a full Dynamic Partial Reconfiguration flow for fast recovery; and (iii) a generated software runtime executable for the setup and runtime management of the whole redundant system. Open-CFR automates the creation of the design, providing as output the bitstreams and software executable for the target FPGA SoC platform. We evaluate the performance of Open-CFR and compare it with state-of-the-art solutions on realistic experimental scenarios and on a use case deploying the HLS4ML framework on popular COTS FPGA SoCs from the AMD-Xilinx Zynq family.
Time-critical networking has long been used for space flight and aerospace applications in general, and multiple standards have emerged. This talk discusses those standards and the technologies (rad-hard chips, eFPGA devices, IP cores) that enable them and other aerospace uses.
Moog Broad Reach's Motor Control Electronics (MCE) uses the Microchip Technology SAMRH707 rad-hard microcontroller to provide motor control for multiple space applications. This technical presentation will discuss the advantages and challenges of using the SAMRH707 microcontroller compared to Moog's heritage FPGA-based solutions for motor control. Lessons learned during the development of the MCE product for human-rated space applications will also be discussed. Additional topics will include an overview of the microcontroller's architecture and feature set for motor control and the associated software development platform.
F Prime is a free, open-source and flight-proven flight software development ecosystem developed at the NASA Jet Propulsion Laboratory that is tailored for small-scale systems such as CubeSats, SmallSats, and instruments. F Prime comprises several elements: (1) an architectural approach that decomposes flight software into discrete components with well-defined interfaces that communicate over ports; (2) a C++ framework providing core capabilities such as message queues and an OS abstraction layer; (3) a growing collection of generic components for basic features such as command dispatch, event logging, and memory management that can be incorporated without modification into new flight software projects; and (4) a suite of tools that streamline key phases of flight software development from design through integrated testing.
Advance enrollment is requested to confirm a seat at the tutorial. If you are interested in participating or have any questions, please email fprime@jpl.nasa.gov.
Anomalous behavior can pose serious risks in the operation of complex, high-consequence systems. Detection is complicated and challenging, especially with high-dimensional data and varying data types, as with satellite state-of-health (SOH) telemetry. Existing detection methods tend to perform poorly in such situations or suffer from satellite constraints on computational power and data availability. Current operational approaches rely on manual, retrospective analysis of system failures, which can result in lengthy response times, risking degraded capabilities and delays in satellite operations. We leverage streaming machine learning capabilities to develop a performant and scalable method for anomaly detection in multimodal satellite SOH telemetry data. Building on a k-means clustering approach, our method demonstrates anomaly detection capabilities that (1) update their joint, multivariate estimation of the current SOH variables in an online and continuous fashion, (2) operate in near real-time with reduced computational and data requirements, and (3) provide meaningful detections with an interpretable feature-space mapping to relevant variables to support further operator diagnosis and response. Using real satellite SOH data as a case study, we demonstrate how our method can provide automated and adaptive diagnostic information, increasing robustness in space operations through more rapid detection of satellite anomalies.
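The detection pattern can be sketched with an off-the-shelf streaming clusterer; the cluster count and distance threshold below are assumptions for illustration, not the deployed system's values:

```python
# Illustrative sketch (not the authors' system): streaming k-means over SOH
# telemetry vectors, flagging samples that sit far from every cluster center.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

N_CLUSTERS = 8  # assumed number of nominal operating modes
model = MiniBatchKMeans(n_clusters=N_CLUSTERS, random_state=0)

def process_batch(batch, threshold):
    """Update clusters online and return a boolean anomaly mask.

    batch: (samples, features) array of SOH telemetry vectors.
    threshold: maximum allowed distance to the nearest cluster center.
    """
    model.partial_fit(batch)                    # online/streaming update
    dists = model.transform(batch).min(axis=1)  # distance to nearest center
    return dists > threshold                    # far from all modes => anomaly
```

Because each detection reduces to "distance to the nearest nominal cluster," the contributing telemetry features can be inspected directly, which mirrors the interpretability property (3) claimed above.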
Successful missions start with Avalanche's game-changing MRAM. It offers the density, radiation resilience, endurance, and reliability to enable software-defined hardware platforms for robust boot and storage, and it is at the heart of next-generation AI multiprocessing architectures.
Traditional space computing relies on older, space-proven hardware and software technologies with significantly lower performance than COTS processors. Recently, a trend toward the use of COTS processors in New Space has raised the potential of space cloud solutions, in which multiple users share a satellite. These solutions rely on the same software stacks used in conventional cloud deployments, namely Linux, which is not qualifiable for space use. In this paper, we describe the hardware solution developed in the METASAT project, which provides an alternative space cloud infrastructure that is fully qualifiable for institutional missions.
In addition, we provide more detail on traditional space computing and New Space, and on the implementation of space cloud solutions. We demonstrate their deficiencies for institutional missions where qualification is required, and we describe in more detail the approach followed in the Horizon Europe METASAT project, showing its benefits in terms of space qualification.
THURSDAY, JULY 18, 2024
Over decades of experience at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES), collaborative virtual reality has proven itself as a powerful medium for interactive analysis of scientific data.
In this presentation, we will discuss the example of how KeckCAVES VR software was used during landing site selection for the 2012 Curiosity mission, and how the experience gained during that and other projects is informing our current efforts to develop a VR framework supporting a wide range of collaborative applications running on low-cost commodity VR hardware.
Anomaly detection in spacecraft telemetry is critical for the success and safety of space missions. Traditional methods often rely on forecasting and threshold techniques to identify anomalies [1]–[5]. This paper presents a comprehensive comparison of traditional forecast-based anomaly detection against two innovative classification methods: direct classification and image classification through Gramian Angular Field (GAF) transforms [6], which have so far been analysed in other domains but not for spacecraft anomaly detection. All the investigated systems leverage deep learning architectures and use the popular real SMAP/MSL spacecraft data from [2]. Our findings suggest that direct classification provides a marginal but statistically significant improvement in anomaly detection over traditional methods. Image classification, while less successful, offers promising directions for future research. The study aims to guide the selection of appropriate anomaly detection techniques for spacecraft telemetry and contribute to the advancement of automated monitoring systems in space missions.
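For reference, the GAF idea can be sketched in a few lines: a telemetry window is rescaled, encoded as angles, and expanded into an image that a CNN can then classify. The window content below is a toy assumption, not the SMAP/MSL data.

```python
# Sketch of a Gramian Angular Summation Field (GASF) transform, which turns a
# univariate telemetry window into an image for a CNN classifier (illustrative).
import numpy as np

def gramian_angular_field(x):
    """Map a 1-D series to a 2-D GASF image.

    Steps: rescale to [-1, 1], encode values as angles phi = arccos(x),
    then form the Gram-like matrix cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))             # polar encoding
    return np.cos(phi[:, None] + phi[None, :])         # GASF image

window = np.sin(np.linspace(0, 8 * np.pi, 64))  # toy telemetry window
image = gramian_angular_field(window)           # 64x64 input for a CNN
```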
The presentation will discuss standards-based board development and design for high-performance single-board computers. An overview will highlight the advantages of standards-based SBC designs. These capabilities are essential for simplifying system integration in increasingly complex systems.
In space applications, the adoption of commercial-off-the-shelf (COTS) single-board computers (SBCs) is increasingly favored due to their size, weight, and power (SWaP) efficiency. This study addresses the critical need to understand the radiation tolerance of such devices in low Earth orbit (LEO) missions, where intense ionizing radiation presents a substantial risk to electronic component functionality. Focusing on the NVIDIA Jetson Orin NX, a leading COTS SBC, we evaluated the radiation resilience of both its 8GB and 16GB models under total ionizing dose (TID) conditions. Our investigation reveals consistent radiation tolerance across the models tested, with both surviving past 36.20 krad(Si). This underscores their considerable resilience to radiation effects and the absence of performance degradation despite challenges related to thermal management. These findings are crucial for the aerospace community, informing the deployment of COTS SBCs in environments with high radiation exposure and impacting considerations for mission success and device longevity.
This is a VR demo of the system presented in the associated talk. Abstract follows:
Over decades of experience at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES), collaborative virtual reality has proven itself as a powerful medium for interactive analysis of scientific data.
In this presentation, we will discuss the example of how KeckCAVES VR software was used during landing site selection for the 2012 Curiosity mission, and how the experience gained during that and other projects is informing our current efforts to develop a VR framework supporting a wide range of collaborative applications running on low-cost commodity VR hardware.
One day, everything that moves will be autonomous. Robotic automation has made significant strides forward, driven by advancements in hardware and artificial intelligence capabilities that have opened new avenues in simulation and the drive toward autonomy. In this workshop we will give a technical introduction to the Omniverse and Isaac SIM platforms, a cutting-edge solution for robotics and simulation.
We will start off with a generic presentation section to introduce use-cases, value and vision of the platform and some examples of how it can be applied to the space industry. Next we'll move over to a more technical hands-on lab where you'll dive into the simulation loop of a 3D engine, learning to initialize experiments with objects, robots, and physics logic, and build some small robotics control tasks and applications within the simulation environment.
The hands-on piece is at a technical beginner level, so you don't need any prior knowledge of Isaac SIM, apart from a basic understanding of Python.
Requirements
Note: We will use the NVIDIA Deep Learning Institute platform for the hands-on portion of this workshop. Attendees will be handed a personal code during the workshop that will give them access to one of the self-paced paid courses. The codes are personal and can be redeemed for one specific course only. You can find more information on how to redeem the DLI platform codes in the attached PDF.
Additionally, course content and access to the environment will be given for up to 1 year after the workshop. There are also other self-paced courses available for further learning.
The emerging paradigm of neuromorphic computing has the potential to provide high-performance, low-power computing for edge artificial intelligence. This promise of high performance and low power consumption makes neuromorphic computing devices attractive for platforms at the edge: those that are constrained in size, weight, and power. Spacecraft fall into the category of edge platforms. In space, computing devices are subject to radiation effects not present on Earth, including single event effects (SEE) and total ionizing dose (TID). In this paper, we present the results of performing proton SEE testing and TID testing on an exemplar neuromorphic processor, the Intel Loihi.
Extending from components to modules to plug-in cards, power and data transport systems have new options. New product offerings are emerging from a renewed look at the needs of today's missions and today's mission assurance approaches.
This work evaluated the single event functional interrupt (SEFI) response of commercial-off-the-shelf (COTS) and radiation-hardened Cortex-M4 (M4) microcontrollers with the Armv7-M ISA. The microcontrollers were exposed to 200 MeV protons. The COTS M4 was further evaluated under carbon ions and alpha particles to assess whether software-dependent control bits could significantly influence the SEFI cross section for this ISA. The single event upset (SEU) results show that both microcontrollers can experience multiple-cell upsets (MCUs), which could facilitate the accumulation of multiple-bit upsets (MBUs). MBUs could be a concern even for radiation-hardened systems with ECC, because ECC crashes the system (a SEFI) to correct MBUs.
Flight-proven and planned rad-hard and rad-tolerant single-board computers, supporting memory technologies including DDR4 and high-density managed NAND components.
In this presentation we review qualification and radiation data for the AMD XQR Versal adaptive SoC devices, and look at how their reconfigurable heterogeneous computing resources enable demanding applications such as digital beamforming and STAP radar processing to be performed on orbit.
One day, everything that moves will be autonomous. Robotic automation has made significant strides forward, driven by advancements in hardware and artificial intelligence capabilities that have opened new avenues in simulation and the drive toward autonomy. In this workshop we will give a technical introduction to the Omniverse and Isaac SIM platforms, a cutting-edge solution for robotics and simulation.
We will start off with a generic presentation section to introduce use-cases, value and vision of the platform and some examples of how it can be applied to the space industry. Next we'll move over to a more technical hands-on lab where you'll dive into the simulation loop of a 3D engine, learning to initialize experiments with objects, robots, and physics logic, and build some small robotics control tasks and applications within the simulation environment.
The hands-on piece is at a technical beginner level, so you don't need any prior knowledge of Isaac SIM, apart from a basic understanding of Python.
Requirements
Note: We will use the NVIDIA Deep Learning Institute platform for the hands-on portion of this workshop. Attendees will be handed a personal code during the workshop that will give them access to one of the self-paced paid courses. The codes are personal and can be redeemed for one specific course only. You can find more information on how to redeem the DLI platform codes in the attached PDF.
Additionally, course content and access to the environment will be given for up to 1 year after the workshop. There are also other self-paced courses available for further learning.
Heavy-ion destructive and non-destructive single event effect (DSEE and NDSEE) and gamma-induced total ionizing dose (TID) characterization results are presented for recent-generation e.MMC managed flash devices.
Building towards a sustained lunar and deep-space presence requires advances in space infrastructure. Technologies commonly used on Earth, such as computer displays, cannot be naively incorporated into flight systems due to reliability concerns. Radiation-tolerant crew displays represent a critical technology gap NASA aims to address as part of its Moon to Mars roadmap. To date, crew displays lag significantly behind state-of-the-art terrestrial systems. Constrained by environmental challenges, legacy hardware, and safety requirements, current crew displays lack the ability to deliver high-resolution graphics, restricting visual communication capabilities and crew autonomy. Moreover, the varying requirements of a diverse collection of surface and orbital lunar assets further complicate finding a generalized solution and design. These limitations and challenges necessitate the development of enhanced avionic systems to support future crewed missions. In this paper, we present key design considerations, a methodology, and a preliminary architecture for realizing a radiation-tolerant display system. Balancing radiation tolerance, compute capability, and scalability, the paper describes a methodology to optimize the performance and reliability of a display computing system. Leveraging a hybrid design of commercial-off-the-shelf (COTS) and radiation-hardened (rad-hard) processors and components, a preliminary architecture is presented that includes hardware and software mitigation strategies addressing both cumulative and acute radiation effects. Lastly, a prototype is presented for benchmarking and validation.
FRIDAY, JULY 19, 2024
Your sponsorship is a strong statement about your organization's commitment to the field of Space Computing. SMC-IT/SCC 2024 continues to offer exciting opportunities for sponsors. Please refer to the Sponsor Prospectus and Sponsor Guide and Order Form for further sponsorship information.
If you have any questions, feel free to contact us at: