Cercana Executive Briefing — Week of March 28–April 3, 2026

152 feeds monitored. Published April 3, 2026.

Executive Summary

The most consequential development this week was the publication of the CNG Geo-Embeddings Sprint report, which moved earth observation embeddings from an emerging research thread into the standards-drafting phase. Co-hosted by CNG, Planet, and Clark University, the March sprint produced concrete patterns for storing, cataloging, and accessing EO embeddings. This is the kind of infrastructure specification work that typically precedes commercial adoption. This matters because embeddings are one of the mechanisms through which satellite imagery can be translated into forms that AI systems can use at scale. Organizations making infrastructure decisions about their EO data pipelines should be watching this thread closely. Standards that solidify here will shape which analytics platforms interoperate and which become walled gardens.

In parallel, the defense and intelligence conversation intensified. Project Geospatial published two substantial pieces, one on GeoAI-driven military targeting ethics and another on the geopolitics of quantum gravity gradiometry, while Octave rebranded Luciad as Alto 2026.0 with explicit cyberthreat visualization capabilities for defense. Taken together, these developments suggest that the defense geospatial market is expanding its technical scope while also confronting the ethical consequences of that expansion. Meanwhile, Canada’s national geospatial strategy consultations drew critical coverage revealing a system with depth but without alignment, and Australia launched the Locus Alliance to replace its collapsed national geospatial body. The pattern across both is institutional. Countries are renegotiating how geospatial infrastructure is governed, and the outcomes will likely shape procurement structures for years.

Major Market Signals

EO Embeddings Move from Research to Standards

The CNG Geo-Embeddings Sprint brought together Planet, Clark University, and other organizations to draft best practices for storing, cataloging, and accessing Earth observation embeddings. This is not another AI capabilities announcement. It is infrastructure specification work. When the EO community starts defining how embeddings are stored and discovered, it suggests that the technology has matured enough for interoperability to matter. For platform vendors and data infrastructure buyers, this is the stage at which architectural decisions can begin to lock in compatibility or isolation. The sprint outputs are headed for community review, which means a public comment period that organizations with EO data pipelines should engage with.

Defense Geospatial Expands Scope and Confronts Ethics Simultaneously

Two distinct but convergent threads emerged this week. Octave launched Alto 2026.0, the rebranded Luciad platform, adding cyberthreat visualization for defense and extending geospatial situational awareness into the cyber domain. Simultaneously, Project Geospatial published a deeply personal account of how GeoAI-driven military targeting is eroding the oversight structures that governed intelligence operations for decades. That convergence is the real signal. The defense geospatial market is rapidly expanding what these tools can do while governance frameworks struggle to keep pace. For vendors entering the defense space, the ethics conversation is no longer peripheral. It is becoming a procurement consideration.

National Geospatial Governance Under Reconstruction

Canada’s NRCan geospatial strategy consultations drew critical analysis from GoGeomatics and EarthStuff, revealing systemic gaps in coordination, infrastructure governance, and institutional alignment. In Australia, the newly formed Locus Alliance launched to fill the void left by the collapsed Geospatial Council of Australia. These are not isolated developments. Multiple countries are simultaneously renegotiating how geospatial infrastructure is governed at the national level. For vendors and service providers, the restructuring of national geospatial bodies directly shapes procurement pipelines, standards adoption, and public-sector contract structures.

Quantum Sensing Enters the Geospatial Regulatory Conversation

Project Geospatial’s deep analysis of quantum gravity gradiometry regulation, combined with SBQuantum’s announcement of a space-bound quantum magnetometer as part of the US government’s MagQuest Challenge, marks the week when quantum sensing moved from theoretical interest to a dual regulatory and commercial question. Quantum gradiometry can reveal subsurface structures, including those with defense and resource extraction significance, at resolutions that current regulatory frameworks were not designed to address. It is still early, but this looks like a genuine market-formation signal worth tracking.

Notable Company Activity

Product Releases

  • Octave (Alto 2026.0): Rebranded its Luciad platform as Alto and launched version 2026.0 with cyberthreat visualization capabilities for defense situational awareness. The rebrand sharpens Octave’s positioning as a defense-focused geospatial intelligence platform.
  • MapTiler: Released April updates that included professional grid overlays and a major satellite imagery refresh across its basemap products.
  • USGS: Released a machine learning tool that forecasts streamflow drought conditions up to 90 days ahead nationwide. This is a significant applied AI deployment for water resource management.
  • Esri: Released the Protected Area Management solution and March 2026 ArcGIS Solutions update, alongside updates to its geocoding and world traffic services.

Partnerships

  • DroneDeploy × Cairn: Enterprise-wide aerial and ground reality capture partnership for housing development portfolio management. A useful example of reality capture moving from project-level use to portfolio-level deployment in construction.
  • Astroscale × Exotrail: Advancing France-Japan cooperation on space sustainability and on-orbit servicing, backed by a visit from Emmanuel Macron and Sanae Takaichi.

Funding & M&A

  • Xona Space Systems: Closed an oversubscribed $170M Series C to accelerate deployment of its Pulsar LEO navigation constellation. Investors include Hexagon, Craft Ventures, and Samsung Next. The round suggests strong market confidence in GPS-alternative positioning infrastructure.
  • Trimble: Signed an agreement to acquire Document Crunch, an AI-powered construction document analysis and risk management company, integrating it into the Trimble Construction One ecosystem.
  • Woolpert: Awarded a $49.9M USACE contract to support I-ATLAS coastal mapping and nautical charting efforts. It is a significant federal LiDAR and survey contract.

Government and Policy Developments

The US National Geodetic Survey’s NSRS modernization effort was the subject of a Geo Week News webinar bringing together NGS leadership and the geospatial community. The message was practical. The reference frame transition is coming, and the community should be preparing rather than worrying. For survey and mapping firms, the modernization will affect nearly every coordinate-dependent workflow in the US market, and early preparation is likely to be an advantage.

Canada’s geospatial strategy consultations drew substantive analysis revealing that while the country has depth in geospatial capabilities, it lacks consistent alignment across governance, infrastructure, and coordination. NRCan’s consultations are surfacing long-standing structural problems rather than resolving them. Sparkgeo’s piece on building a national urban forest data view illustrated both the ambition and the fragmentation of Canadian geospatial infrastructure.

In Australia, the Locus Alliance launched as a new national geospatial body to fill the gap left by the Geospatial Council of Australia’s collapse. The Alliance aims to be structurally different from its predecessor, though details of its governance are still emerging. The Ordnance Survey in the UK announced that its National Geographic Database now holds 16 data collections and 70 major enhancements, positioning Britain’s national mapping as a continuously updated digital product rather than a periodic release.

Technology and Research Trends

The technology story of the week centers on the maturation of EO data infrastructure. CNG’s Geo-Embeddings Sprint produced actionable specifications rather than aspirational roadmaps, while EarthDaily published on how real-time crop signals from its constellation are changing agricultural market decisions. It was one of the rare posts connecting EO technology directly to commercial demand-side outcomes. TerraWatch’s “Anatomy of an Earth Observation Use Case” offered a structural critique of how the EO industry uses (and misuses) the term “use case,” pushing toward more rigorous framing of what makes an EO application commercially viable.

The Spatial Edge’s weekly digest covered satellites mapping local human development levels, LLMs estimating flood damage without training data, and foundation models for ecological mapping. Taken together, these offer a concentrated snapshot of where applied spatial data science seems to be heading. The through-line across these is the compression of traditional multi-step geospatial workflows into single-model inference. That has implications for both the skills market and the value chain.

Spatialists covered Stefan Ziegler’s work raster-enabling Apache Hop with GDAL-based transforms, demonstrating practical LiDAR-to-building-height ETL pipelines. Hands-on, cloud-native geospatial tutorial content of this kind remains rare in the ecosystem, which is a large part of why it stands out.

Open Source Ecosystem Signals

The open-source ecosystem had a quieter week following the QGIS 4.0 and FOSSGIS 2026 activity of recent weeks. geoObserver noted a QGIS tip on SFCGAL functions now available as a plugin, which is better understood as a post-4.0 ecosystem refinement than a headline release. geoObserver also reflected on FOSSGIS 2026 and celebrated 44,444 downloads of the GeoBasis_Loader plugin, a milestone for the German open-data geospatial tooling community.

The Spatialists coverage of Apache Hop raster enablement is worth flagging here as well: the hop-gdal-plugin extends an open-source ETL framework with geospatial raster capabilities, bridging the gap between data engineering and geospatial processing. It represents the kind of cross-pollination between general-purpose open-source tooling and geospatial-specific capabilities that tends to strengthen the broader ecosystem.

The CNG Geo-Embeddings Sprint, covered in Market Signals above, also carries open-source ecosystem significance: the sprint’s outputs are intended for community review and adoption, meaning they will likely influence how open-source EO tooling handles embedding storage and discovery.

Watch List

  • Spiral Blue (Australia): Delivered space LiDAR hardware to a UK company as part of its strategy to build an EO space LiDAR capability. Space-based LiDAR is a nascent market with potentially transformative implications for forestry, bathymetry, and terrain mapping if costs come down.
  • Geospatial data and the EU Deforestation Regulation (EUDR): Coverage on Medium explored the geolocation data challenges and compliance implications of the EUDR. The regulation will create a distinct demand signal for geospatial verification services across commodity supply chains.
  • SBQuantum’s space quantum magnetometer: The MagQuest-funded mission could initiate a new class of GPS-independent navigation and subsurface sensing from orbit. If the technology performs, it opens regulatory and commercial questions that the geospatial industry has not yet grappled with.
  • Mainz cloud-native geospatial infrastructure: A German city implementing a fully cloud-based geospatial data infrastructure with VertiGIS and Esri. Municipal adoption of cloud-native GDI at this scale is an early but meaningful demand signal for enterprise cloud geospatial platforms in European public administration.

Top Posts of the Week

  1. Geo-Embeddings Sprint: Advancing standards for Earth observation embeddings (CNG Blog) moves EO embeddings from research into standards specification, with direct implications for data infrastructure interoperability.
  2. The New Battlespace: How Geospatial AI, Outdated Intelligence, and the Illusion of Oversight Are Reshaping Military Targeting (Project Geospatial) is a deeply informed and personal account of how GeoAI is outpacing the governance structures designed to prevent intelligence failures.
  3. The Anatomy of an Earth Observation Use Case (TerraWatch Space) offers a structural critique of how the EO industry frames commercial viability and pushes beyond “use case” as marketing shorthand.
  4. The Subsurface Geopolitics: Regulating the Commercial Use of Quantum Gravity Gradiometry (Project Geospatial) maps the emerging regulatory landscape for a technology that can reveal what lies underground at unprecedented resolution.
  5. Three Geospatial AI Myths Federal Buyers Should Not Believe (Cercana Systems) provides practical procurement-focused guidance that cuts through GeoAI marketing claims for federal decision-makers.

Cercana Executive Briefing is derived from 152 feeds aggregated by geofeeds.me.

Variations of Open

Introduction

The word “open” gets used so often in tech that it starts to feel universal, like everyone must be talking about the same thing. But once you listen closely, it becomes obvious that different groups mean very different things when they say it. A software engineer is thinking about readable source code and licenses. Someone who works with data is thinking about public portals and Creative Commons. People in AI might be picturing model weights you can download even if you can’t see how the model was trained. And increasingly, someone might just mean information that’s publicly visible online, such as social media posts, ship trackers, or livestreams, without any license at all.

None of these interpretations are wrong. They just grew out of different needs. Openness meant one thing when it applied to code, something else entirely when governments began releasing public data, and now it’s shifting again as AI models enter the mix. Meanwhile, the rise of OSINT has blurred things further, with “open” sometimes meaning nothing more than “accessible to anyone with an internet connection.”

The result is that modern systems combine pieces from all these traditions and people often assume they’re aligned when they’re not. The friction shows up not because anyone misunderstands the technology, but because the language hasn’t kept up with how quickly the idea of openness has expanded.

Open-Source Software

In terms of software, “open” means open-source. In that context it has a clear meaning. You can read the code, change it, and share it as long as you follow the license. That predictability is a big part of why the movement grew. People understood the rules and trusted that the rules would hold.

But the full spectrum of open-source shows up in the habits and culture around it. Communities develop their own rhythms for how to submit a pull request, file a useful bug report, talk through disagreements, and decide which features or fixes make it into a release. None of that comes from a license. People learn it by watching others work, answering questions in long issue threads, or showing up in mailing lists and channels where projects live.

There’s also an unspoken agreement inside open-source software. If a project helps you, you try to help it back. Maybe you fix a typo, or you donate, or you answer someone else’s question. It’s not required, but it’s how projects stay healthy.

Anyone who has maintained an open-source project knows it isn’t glamorous. It can be repetitive, sometimes thankless, and often demanding. Good maintainers end up juggling technical decisions, community management, and the occasional bit of diplomacy.

All this shapes a shared understanding of what openness means in software. People think not just about reading code, but about the whole ecosystem built around it: contribution paths, governance models, release practices, and the blend of freedom and responsibility that holds everything together.

Once the idea of openness moved beyond software, that ecosystem didn’t necessarily apply. As other fields developed their own approaches to openness, patterns and practices evolved in alignment with each domain’s particular needs.

Open Data

Open data developed along its own path. Instead of code, publishers released information about the world: transit schedules, land use maps, environmental readings, census tables. The goal was simple: make public data easier to access so people could put it to use.

Because software licenses didn’t fit, data and content licenses such as Creative Commons were developed. CC BY and CC0 became common. Open Data Commons created specialized database licenses—ODbL added share-alike requirements specifically for databases, while PDDL offered a public domain dedication. You can see the differences in well-known datasets. OpenStreetMap’s ODbL means derived data often has to stay open and always requires attribution. USGS datasets, which are mostly public domain, are easy to fold into commercial tools. City transit feeds under CC BY only ask for attribution.

Privacy concerns complicate open data, which isn’t exempt from GDPR, CCPA, or similar laws. Even seemingly innocuous data can reveal personal information—location datasets showing frequent stops at specific addresses, or timestamped transit records that establish movement patterns. Many publishers address this through aggregation, anonymization, or by removing granular temporal and spatial details, but anyone working with open data still ends up checking metadata, tracking where files came from, and thinking about what patterns a dataset might reveal.

Open Source Information (OSINT)

Open-Source Intelligence (OSINT) is an overlapping but distinct concept from open data. Information is considered “open” in OSINT because anyone can access it, not because anyone has the right to reuse it. Ship trackers, social media posts, and livestreams from city cameras are all examples of data that may fall into this category.

These sources vary widely in reliability. Some come from official databases or verified journalism. Others come from unvetted social media content, fast-moving crisis reporting, or user-generated material with no clear provenance. OSINT analysts rely heavily on validation techniques such as correlation, triangulation, consensus across multiple sources, and structured analytic methods.

While OSINT has deep roots in government intelligence work, it is now widely practiced across sectors including journalism, cybersecurity, disaster response, financial services, and competitive intelligence. Marketing technologies have expanded OSINT further into the private sector, making large-scale collection and analysis tools widely accessible.

Confusion can arise when open data and OSINT are treated as interchangeable. Someone may say they used open data, meaning a licensed dataset from a government portal. Someone else hears open and assumes it means scraping whatever is publicly visible.

This distinction matters because the two categories carry fundamentally different permissions and obligations. Open data comes with explicit rights to reuse, modify, and redistribute—legal clarity that enables innovation and collaboration. OSINT, by contrast, exists in a gray area where accessibility doesn’t imply permission, and users must navigate copyright, privacy laws, and terms of service on a case-by-case basis. 

Understanding this difference isn’t just semantic precision; it shapes how organizations design data strategies, assess legal risks, and build ethical frameworks for information use. When practitioners clearly specify whether they’re working with licensed open data or publicly accessible OSINT, they help prevent costly misunderstandings and ensure their work rests on solid legal and ethical foundations.

Open AI Models

In AI, openness takes on another meaning entirely. A model is more than code or data. It’s architecture, training data, weights, and the training process that binds everything together. So when a model is described as open, it’s natural to ask which part is actually open.

You see the variety in projects released over the past few years. Some groups publish only the weights and keep the training data private. Meta’s Llama models fall into this category. You can download and fine-tune them, but you don’t see what went into them. Others release architectural details and research papers without sharing trained weights—early transformer work from Google and OpenAI showed the approach without providing usable models. GPT-NeoX took a middle path, releasing both architecture and weights but with limited training data transparency.

A few projects aim for full transparency. BLOOM is the most visible example, with its openly released code, data sources, logs, and weights. It took a global effort to pull that off, and it remains the exception, though projects like Falcon and some smaller research models have attempted similar transparency.

This partial openness shapes how people use these models. If you only have the weights, you can run and fine-tune the model, but you can’t inspect the underlying data. When the training corpus stays private, the risks and biases are harder to understand. And when licenses restrict use cases, as they do with Llama’s custom license that prohibits certain commercial applications, or research-only models like many academic releases, you might be able to experiment but not deploy. Mistral’s models show another approach—Apache 2.0 licensing for some releases but custom licenses for others.

The idea of contribution looks different too. You don’t patch a model the way you patch a library. You build adapters, write wrappers, create LoRA fine-tunes, or train new models inspired by what came before. Openness becomes less about modifying the original artifact and more about having the freedom to build around it.
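The low-rank adapter idea behind LoRA-style fine-tunes can be sketched in a few lines of NumPy: the frozen base weights stay untouched, and a small trainable update is added on top. This is a toy illustration of the math only, not any library’s API; the dimensions, rank, and variable names are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2          # toy sizes; real models are far larger

W = rng.normal(size=(d_out, d_in))   # frozen base weights (the shared "open" artifact)
A = rng.normal(size=(rank, d_in))    # trainable low-rank factor
B = np.zeros((d_out, rank))          # starts at zero, so the adapter is initially a no-op

def forward(x, scale=1.0):
    """Base layer plus additive low-rank adapter: (W + scale * B @ A) @ x."""
    return W @ x + scale * (B @ (A @ x))

# Before any training, the adapter contributes nothing:
x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)

# "Training" updates only A and B; W never changes, which is why adapters
# can be distributed separately from (and layered onto) the base weights.
B = rng.normal(size=(d_out, rank))
delta = B @ A                         # effective weight update, rank at most 2
```

The point of the sketch is the division of labor: the original artifact is read-only, and the contribution lives entirely in the small `A` and `B` matrices.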

So in AI, open has become a spectrum. Sometimes it means transparent. Sometimes it means accessible. Sometimes it means the weights are downloadable even if everything else is hidden. The word is familiar, but the details rarely match what openness means in software or data.

Real World Considerations

These differences are fairly straightforward when they live in their own domains. Complexity can arise when they meet inside real systems. Modern projects rarely stick to one tradition. A workflow might rely on an open-source library, an open dataset, publicly scraped OSINT, and a model with open weights, and each piece brings its own rules.

Teams can run into this without realizing it. Someone pulls in an Apache-licensed geospatial tool and combines it with CC BY data; those two work fine together. But then someone else loads OpenStreetMap data without noticing the share-alike license that affects everything it touches. A third person adds web-scraped location data from social media, not considering the platform’s terms of service or privacy implications. A model checkpoint from Hugging Face gets added on top, even though the license limits commercial fine-tuning. Most of these combinations are manageable with proper documentation, but some create real legal barriers.
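One lightweight way to catch these collisions early is to record each component’s license traits and compute the pipeline’s combined obligations. The sketch below is illustrative only: the trait table is hand-rolled for a few licenses mentioned here, the `audit_pipeline` function and component names are invented, and nothing in it substitutes for legal review.

```python
# Toy license audit. The trait table is a deliberate simplification for
# illustration; real compatibility analysis needs a lawyer, not a dict.
LICENSE_TRAITS = {
    "Apache-2.0":   {"attribution": True,  "share_alike": False, "commercial": True},
    "CC-BY-4.0":    {"attribution": True,  "share_alike": False, "commercial": True},
    "ODbL-1.0":     {"attribution": True,  "share_alike": True,  "commercial": True},
    "scraped":      {"attribution": False, "share_alike": False, "commercial": False},
    "model-custom": {"attribution": True,  "share_alike": False, "commercial": False},
}

def audit_pipeline(components):
    """components: list of (name, license_key) pairs.
    Returns the combined obligations plus red flags to investigate."""
    obligations = {"attribution": False, "share_alike": False}
    flags = []
    for name, lic in components:
        traits = LICENSE_TRAITS[lic]
        obligations["attribution"] |= traits["attribution"]
        obligations["share_alike"] |= traits["share_alike"]
        if not traits["commercial"]:
            flags.append(f"{name}: no clear commercial-use permission ({lic})")
    return obligations, flags

pipeline = [
    ("geo-toolkit", "Apache-2.0"),
    ("transit-feed", "CC-BY-4.0"),
    ("osm-extract", "ODbL-1.0"),
    ("social-scrape", "scraped"),
]
obligations, flags = audit_pipeline(pipeline)
# The single ODbL component makes the combined result share-alike, and the
# scraped component surfaces as a flag because visibility is not a license.
```

Even a toy check like this makes the mixed-tradition problem visible at assembly time instead of at ship time.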

Expectations collide too. A software engineer assumes they can tweak anything they pull in. A data analyst assumes the dataset is stable and comes with clear reuse rights. An OSINT analyst assumes publicly visible means fair game for analysis. Someone working with models assumes fine-tuning is allowed. All reasonable assumptions inside their own worlds, but they don’t line up automatically.

The same thing happens in procurement. Leadership hears open and thinks it means lower cost or fewer restrictions. But an open-source library under Apache is not the same thing as a CC BY dataset, neither is the same as scraped public data that’s accessible but not licensed, and none of those are the same as an open-weight model with a noncommercial license.

Geospatial and AI workflows feel these tensions even more. They rarely live cleanly in one domain. You might preprocess imagery with open-source tools, mix in open government data, correlate it with ship tracking OSINT, and run everything through a model that’s open enough to test but not open enough to ship. Sometimes careful documentation and attribution solve the problem. Other times, you discover a share-alike clause or terms-of-service violation that requires rethinking the entire pipeline.

This is when teams slow down and start sorting things out. Not because anyone did something wrong, but because the word open did more work than it should have and because “publicly accessible” got mistaken for “openly licensed.”

Clarifying Open

A lot of this gets easier when teams slow down just enough to say what they actually mean by open. It sounds almost too simple, but it helps. Are we talking about open code, open data, open weights, open access research, or just information that’s publicly visible? Each one carries its own rules and expectations, and naming it upfront clears out a lot of the fog.

Most teams don’t need a formal checklist, though those in regulated industries, government contracting, or high-stakes commercial work often do. What every team needs is a little more curiosity about the parts they’re pulling in—and a lightweight way to record the answers. If someone says a dataset is open, ask under what license and note it in your README or project docs. If a model is open, check whether that means you can fine-tune it, use it commercially, or only experiment with it—and document which version you’re using, since terms can change. If a library is open-source, make sure everyone knows what the license allows in your context. If you’re using publicly visible information—social media posts, ship trackers, livestreams—be clear that this is OSINT, not licensed open data, and understand what legal ground you’re standing on.

These questions matter most at project boundaries: when you’re about to publish results, share with partners, or move from research to production. A quick decision log—even just a shared document listing what you’re using and under what terms—prevents expensive surprises. It also helps when someone new joins the team or when you revisit the project months later.
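A decision log of this kind can be as small as a YAML file checked into the repository. The entries below are invented for illustration; the field names are not a standard, just one workable shape:

```yaml
# assets.yml — one entry per external component; names and dates are examples
- name: osm-buildings-extract
  kind: open data
  license: ODbL-1.0
  obligations: [attribution, share-alike on derived databases]
  decided: 2026-03-30

- name: vessel-positions-scrape
  kind: OSINT            # publicly visible, NOT licensed open data
  license: none
  obligations: [check site terms of service before redistribution]
  decided: 2026-03-30

- name: base-model-checkpoint
  kind: open weights
  license: custom (no commercial fine-tuning)
  obligations: [research use only; re-review before production]
  decided: 2026-04-01
```

The `kind` field does the work the word “open” was doing before: it names which tradition each component comes from.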

The more people get used to naming the specific flavor of openness they’re dealing with and writing it down somewhere searchable, the smoother everything else goes. Projects move faster when everyone shares the same assumptions. Compliance reviews become straightforward when the licensing story is already documented. Teams stop discovering deal-breakers right when they’re trying to ship something. It’s not about being overly cautious or building heavy process. It’s just about giving everyone a clear, recorded starting point before the real work begins.

Conclusion

If there’s a theme running through all of this, it’s that the word open has grown far beyond its original boundaries. It meant one thing in software, something different in the world of public data, another in AI, and gets stretched even further when people conflate it with simply being publicly accessible. Each tradition built its own norms and expectations, and none of them are wrong. They just don’t automatically line up the way people sometimes expect.

Most of the friction we see in real projects doesn’t come from bad decisions. It comes from people talking past one another while using the same word. A workflow can look straightforward on paper but fall apart once you realize each component brought its own version of openness, or that some parts aren’t “open” at all, just visible. By the time a team has to sort it out, they’ve already committed to choices they didn’t realize they were making.

The good news is that this is manageable. When people take a moment to say which kind of open they mean, or acknowledge when they’re actually talking about OSINT or other public information, everything downstream gets smoother: design, licensing, procurement, expectations, even the conversations themselves. It turns a fuzzy idea into something teams can actually work with. It requires ongoing attention, especially as projects grow and cross domains, but the effort pays off.

Openness is a powerful idea, maybe more powerful now than ever. But using it well means meeting it where it actually is, not where we assume it came from.

At Cercana Systems, we have deep experience with the full open stack and can help you navigate the complexities as you implement open assets in your organization. Contact us to learn more.

Header image credit: Aaron Pruzaniec, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

Geospatial Without Maps

When most people hear “geospatial,” they immediately think of maps. But in many advanced applications, maps never enter the picture at all. Instead, geospatial data becomes a powerful input to machine learning workflows, unlocking insights and automation in ways that don’t require a single visual.

At its core, geospatial data is structured around location—coordinates, areas, movements, or relationships in space. Machine learning models can harness this spatial logic to solve complex problems without ever generating a map. For example:

  • Predictive Maintenance: Utility companies use the GPS coordinates of assets (like transformers or pipelines) to predict failures based on environmental variables like elevation, soil type, or proximity to vegetation (AltexSoft, 2020). No map is needed—only spatially enriched feature sets for training the model.
  • Crop Classification and Yield Prediction: Satellite imagery is commonly processed into grids of numerical features (such as NDVI indices, surface temperature, soil moisture) associated with locations. Models use these purely as tabular inputs to predict crop types or estimate yields (Dash, 2023).
  • Urban Mobility Analysis: Ride-share companies model supply, demand, and surge pricing based on geographic patterns. Inputs like distance to transit hubs, density of trip starts, or average trip speeds by zone feed machine learning models that optimize logistics in real time (MIT Urban Mobility Lab, n.d.).
  • Smart Infrastructure Optimization: Photometrics AI employs geospatial AI to enhance urban lighting systems. By integrating spatial data and AI-driven analytics, it optimizes outdoor lighting to ensure appropriate illumination on streets, sidewalks, crosswalks, and bike lanes while minimizing light pollution in residential areas and natural habitats. This approach not only improves safety and energy efficiency but also supports environmental conservation efforts (EvariLABS, n.d.).

These examples show how spatial logic—such as spatial joins, proximity analysis, and zonal statistics—can drive powerful workflows even when no visualization is involved. In each case, the emphasis shifts from presenting information to enabling analysis and automation. Features are engineered based on where things are, not just what they are. However, once the spatial context is baked into the dataset, the model itself treats location-derived features just like any other numerical or categorical variable.
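As a concrete illustration, the “spatially enriched feature set” described above can be as simple as turning raw coordinates into distances and counts. The sketch below uses a haversine distance in pure Python; the asset and hub coordinates are made up for the example, and `spatial_features` is an invented helper, not a library function.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spatial_features(asset, hubs, radius_km=5.0):
    """Turn one asset's coordinates into tabular, map-free model inputs."""
    dists = [haversine_km(asset[0], asset[1], h[0], h[1]) for h in hubs]
    return {
        "dist_nearest_hub_km": min(dists),
        "hubs_within_radius": sum(d <= radius_km for d in dists),
    }

# Hypothetical transformer location and nearby transit hubs:
asset = (45.50, -122.65)
hubs = [(45.52, -122.68), (45.40, -122.75), (45.51, -122.66)]
features = spatial_features(asset, hubs)
# `features` feeds a model as plain numeric columns — no map ever rendered.
```

Once computed, these columns are indistinguishable to the model from any other numeric feature, which is exactly the point.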

Using geospatial technology without maps allows organizations to focus on operational efficiency, predictive insights, and automation without the overhead of visualization. In many workflows, the spatial relationships between objects are valuable as data features rather than elements needing human interpretation. By integrating geospatial intelligence directly into machine learning models and decision systems, businesses and governments can act on spatial context faster, at scale, and with greater precision.

To capture these relationships systematically, spatial models like the Dimensionally Extended nine-Intersection Model (DE-9IM) (Clementini & Di Felice, 1993) provide a critical foundation. In traditional relational databases, connections between records are typically simple—one-to-one, one-to-many, or many-to-many—and must be explicitly designed and maintained. DE-9IM extends this by defining nuanced geometric interactions, such as overlapping, touching, containment, or disjointness, which are implicit in the spatial nature of geographic objects. This significantly reduces design and maintenance overhead while allowing much richer, more dynamic spatial relationships to be leveraged in analysis and workflows.
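The DE-9IM matrix itself is just a nine-character string, one character per intersection between the interiors, boundaries, and exteriors of two geometries, and named predicates like "overlaps" are patterns over that string. Here is a minimal sketch of that pattern matching; the example matrix is what PostGIS `ST_Relate` or Shapely's `relate()` would return for two partially overlapping squares:

```python
def de9im_match(matrix: str, pattern: str) -> bool:
    """Check a 9-character DE-9IM matrix against a predicate pattern.

    Each matrix character gives the dimension of one intersection:
    'F' = empty, '0' = point, '1' = line, '2' = area.
    Pattern characters: 'T' = any non-empty, 'F' = empty,
    '*' = don't care, or an exact dimension ('0', '1', '2').
    """
    for m, p in zip(matrix, pattern):
        if p == "*":
            continue
        if p == "T" and m == "F":
            return False
        if p == "F" and m != "F":
            return False
        if p in "012" and m != p:
            return False
    return True

# Standard predicates as DE-9IM patterns (cell order:
# II, IB, IE, BI, BB, BE, EI, EB, EE).
PREDICATES = {
    "contains": "T*****FF*",
    "within":   "T*F**F***",
    "touches":  "FT*******",  # one of three "touches" patterns
    "overlaps": "T*T***T**",  # area/area case
}

# Matrix for two partially overlapping squares.
matrix = "212101212"
result = {name: de9im_match(matrix, pat) for name, pat in PREDICATES.items()}
print(result)
```

Because every predicate reduces to a string comparison, the same matrix can answer many relationship questions without recomputing any geometry.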

By embedding DE-9IM spatial predicates into machine learning workflows, organizations can extract richer, context-aware features from their data. For example, rather than merely knowing two infrastructure assets are ‘related,’ DE-9IM enables classification of whether one is physically inside a risk zone, adjacent to a hazard, or entirely separate—substantially improving the precision of classification models, risk assessments, and operational planning.

Machine learning and AI systems benefit from the DE-9IM framework by gaining access to structured, machine-readable spatial relationships without requiring manual feature engineering. Instead of inferring spatial context from raw coordinates or designing custom proximity rules, models can directly leverage DE-9IM predicates as input features. This enhances model performance in tasks such as spatial clustering, anomaly detection, and context-aware classification, where the precise nature of spatial interactions often carries critical predictive signals. Integrating DE-9IM into AI pipelines streamlines spatial feature extraction, improves model explainability, and reduces the risk of omitting important spatial dependencies.
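As a sketch of that idea, a DE-9IM matrix can be expanded into numeric columns that a model consumes directly. The matrix string and feature names below are illustrative; in practice the matrix would come from a spatial engine such as PostGIS `ST_Relate`:

```python
# Map each DE-9IM cell to a numeric value: 'F' (empty) becomes -1,
# otherwise the dimension of the intersection (0, 1, or 2).
DIM = {"F": -1, "0": 0, "1": 1, "2": 2}

def de9im_features(matrix: str, prefix: str = "de9im") -> dict:
    """Expand a 9-character DE-9IM matrix into numeric model features.

    Cells are ordered interior/boundary/exterior of A crossed with
    interior/boundary/exterior of B.
    """
    cells = ["ii", "ib", "ie", "bi", "bb", "be", "ei", "eb", "ee"]
    return {f"{prefix}_{c}": DIM[ch] for c, ch in zip(cells, matrix)}

# Hypothetical asset-vs-flood-zone relationship from a spatial join.
row = de9im_features("212101212", prefix="asset_zone")
print(row)
```

Each asset/zone pair becomes one feature row, so the precise nature of the spatial interaction—inside, adjacent, or separate—is available to the model without any custom proximity rules.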

Harnessing geospatial intelligence without relying on maps opens up powerful new pathways for innovation, operational excellence, and automation. Whether optimizing infrastructure, improving predictive maintenance, or enriching machine learning models with spatial logic, organizations can leverage these techniques to achieve better outcomes with less overhead. At Cercana Systems, we specialize in helping clients turn geospatial data into actionable insights that drive real-world results. Ready to put geospatial AI to work for you? Contact us today to learn how we can help you modernize and optimize your data-driven workflows.

References

AltexSoft. (2020). Predictive maintenance: Employing IIoT and machine learning to prevent equipment failures. AltexSoft. https://www.altexsoft.com/blog/predictive-maintenance/

Clementini, E., & Di Felice, P. (1993). A model for representing topological relationships between complex geometric objects. ACM Transactions on Information Systems, 11(2), 161–193.

Dash, S. K. (2023, May 10). Crop classification via satellite image time-series and PSETAE deep learning model. Medium. https://medium.com/geoai/crop-classification-via-satellite-image-time-series-and-psetae-deep-learning-model-c685bfb52ce

EvariLABS. (2025, April 14). Photometrics AI. LinkedIn. https://www.linkedin.com/pulse/what-counts-real-roi-streetlight-owners-operators-photometricsai-vqv7c/

MIT Urban Mobility Lab. (n.d.). Machine learning for transportation. Massachusetts Institute of Technology. https://mobility.mit.edu/machine-learning

Reflections on the Process of Planning FedGeoDay 2025

What is FedGeoDay?

FedGeoDay is a single-track conference dedicated to federal use cases of open geospatial ecosystems. These open ecosystems take a wide variety of forms, but largely include anything designed around open data, open source software, and open standards. The main event is a one-day commitment, followed by a day of optional hands-on workshops.

FedGeoDay has existed for roughly a decade, serving as a day of learning, networking, and collaboration in the Washington, D.C. area. Recently, Cercana Systems president Bill Dollins was invited to join the planning committee, and he served as a co-chair for FedGeoDay 2024 and 2025. His hope is that attendees come away with practical examples of how to effectively use open geospatial ecosystems in their jobs.

Photo courtesy of OpenStreetMap US on LinkedIn.

“Sometimes the discussion around those concepts can be highly technical and even a little esoteric, and that’s not necessarily helpful for someone who’s just got a day job that revolves around solving a problem. Events like this are very helpful in showing practical ways that open software and open data can be used.”

Dollins joined the committee for a multitude of reasons. In this post, we explore some of those reasons, what he thinks he brings to the table in planning the event, and what he has learned from the process.

Why did you join the committee?

When asked why he joined the planning committee for FedGeoDay, Dollins said his primary motivation was to give back, in a hands-on way, to a community that has been helpful and valuable to him throughout his career.

“In my business, I derive a lot of value from open-source software. I use it a lot in the solutions I deliver in my consulting, and when you’re using open-source software you should find a way that works for you to give back to the community that developed it. That can come in a number of ways. That can be contributing code back to the projects that you use to make them better. You can develop documentation for it, you can provide funding, or you can provide education, advocacy, and outreach. Those last three components are a big part of what FedGeoDay does.”

He also says that while being a co-chair of such an impactful event helps him maintain visibility in the community, the opportunity to keep his teamworking skills fresh was important to him, too.

“For me, also, I’m self-employed. Essentially, I am my team,” said Dollins. “It can be really easy to sit at your desk and deliver things and sort of lose those skills.”

What do you think you brought to the committee?

Dollins has had a long career in the geospatial field and has spent the majority of his time in leadership positions, so he was confident in his ability to contribute in this new form of leadership role. Event planning is a beast of its own, but early on in the more junior roles of his career, the senior leadership around him went out of their way to teach him about project cost management, staffing, and planning agendas. He then was able to take those skills into a partner role at a small contracting firm where he wore every hat he could fit on his head for the next 15 years, including still doing a lot of technical and development work. Following his time there, he had the opportunity to join the C-suite of a private sector SaaS company and was there for six years, really rounding out his leadership experience. 

One area where he felt he lacked experience was community engagement, and event planning is a great way to develop those skills.

“Luckily, there’s a core group of people who have been planning and organizing these events for several years. They’re generally always happy to get additional help and they’re really encouraging and really patient in showing you the rules of the road, so that’s been beneficial, but my core skills around leadership were what applied most directly. It also didn’t hurt that I’ve worked with geospatial technology for over 30 years and open-source geospatial technology for almost 20, so I understood the community these events serve and the technology they are centered around,” said Dollins.

Photo courtesy of Ran Goldblatt on LinkedIn.

What were some of the hard decisions that had to be made?

Photo Courtesy of Cercana Systems on LinkedIn.

Attendees of FedGeoDay in previous years will likely remember that, in the past, the event has always been free for feds to attend. The planning committee, upon examining the revenue sheets from last year’s event, noted that the single largest unaccounted-for cost was the free luncheon. A post-event survey was sent out, and federal attendees largely indicated that they would not take issue with contributing $20 to cover the cost of lunch. However, the landscape of the community changed in a manner most people did not see coming.

“We made the decision last year, and keep in mind the tickets went on sale before the change of administration, so at the time we made the decision last year it looked like a pretty low-risk thing to do,” said Dollins.

Dollins went on to say that while the landscape shifts any time the administration changes, even when the party in power stays the same, this transition has been particularly jarring.

“There’s probably a case to be made that we could have bumped up the cost of some of the sponsorships and possibly the industry tickets a little bit and made an attempt to close the gap that way. We’ll have to see what the numbers look like at the end. The most obvious variable cost was the cost of lunches against the free tickets, so it made sense to do last year and we’ll just have to look and see how the numbers play out this year.”**

What have you taken away from this experience?

Dollins says one of his biggest takeaways from helping to plan FedGeoDay has been learning to apply leadership in a different context. Throughout most of his career, he has led more traditional team structures with a clearly defined hierarchy and specified roles. Working with a team of volunteers whose primary concern is their own day jobs requires a different approach.

“Everyone’s got a point of view, everyone’s a professional and generally a peer of yours, and so there’s a lot more dialogue. The other aspect is that it also means everyone else has a day job, so sometimes there’s an important meeting and the one person that you needed to be there couldn’t do it because of that. You have to be able to be a lot more asynchronous in the way you do these things. That’s a good thing to give you a different approach to leadership and teamwork,” said Dollins on the growth opportunity.

Dollins has even picked up some new work from his efforts on the planning committee by virtue of getting to work and network with people who weren’t necessarily in his circle beforehand. Though he’s worked in the geospatial field for 30 years and focused heavily on open-source work for 20, he says he felt hidden away from the community in a sense during his time in the private sector.

Photo courtesy of Lane Goodman on LinkedIn.

“This has helped me get back circulating in the community and to be perceived in a different way. In my previous iterations, I was seen mainly from a technical perspective, and so this has kind of helped me let the community see me in a different capacity, which I think has been beneficial.”

FedGeoDay 2025 has concluded and was a huge success for all involved. Cercana Systems looks forward to continuing to sponsor the event going forward, and Dollins looks forward to continuing to help this impactful event bring the community together in the future. 

Photo courtesy of Cercana Systems on LinkedIn.

**This interview was conducted before FedGeoDay 2025 took place. The event exceeded the attendance levels of FedGeoDay 2024. 

FedGeoDay 2025 Highlights

The Cercana Systems team had a wonderful time attending FedGeoDay 2025 in Washington, D.C.! It was fun to catch up with long-time colleagues, make new professional connections, and learn how a wide array of new projects are contributing to the ever-evolving world of open geospatial ecosystems. 

A standout highlight was the in-depth keynote by Katie Picchione of NASA’s Disasters Program on the critical role played by open geospatial data in disaster response. Additionally, Ryan Burley of GeoSolutions moderated an excellent panel on Open-Source Geospatial Applications for Resilience, and Eddie Pickle of Crunchy Data led an energetic panel on Open Data for Resilience. 

We were especially excited about the “Demystifying AI” panel, moderated by Cercana’s president Bill Dollins, with panelists Emily Kalda of RGi, Jason Gilman of Element 84, Ran Goldblatt of New Light Technologies, and Jackie Kazil of Bana Solutions.

Location is an increasingly important component of cybersecurity, and FedGeoDay featured a fireside chat on the topic led by Ashley Fairman of DICE Cyber. On either side of the lunch break, Wayne Hawkins of RGi moderated a series of informative lightning talks on a range of topics.

FedGeoDay was a content-rich event that was upbeat from beginning to end. We are grateful to all of the presenters and panelists for taking the time to share their knowledge and to the organizing committee for their work in pulling together such a high-quality event. Cercana is proud to support FedGeoDay and looks forward to continuing to do so for years to come.