With algorithms driving purchases and clicks all over the internet, nationwide facial recognition systems starting to come online in foreign nations, and autonomous vehicles on the horizon, there seems to be a shroud of fear surrounding anything having to do with AI. Some of these fears are legitimate, but many simply reflect a lack of understanding of the subject at hand. This article will explain some concepts and terminology of Machine Learning, Deep Learning, and Artificial Intelligence at a surface level, and recommend additional reading for a more in-depth look at the subject.
I have been a part of three different data collection efforts that captured line geometry and infrastructure for recreational trails. The first was the development of the Fish and Wildlife Service (FWS) Trails Inventory Program, which involved collecting trails data in FWS national wildlife refuges across the United States. The second collected data on all of the recreational trails at Haleakala National Park in Hawaii; it was based upon the FWS trails data collection, with a few minor changes to how the data was gathered. In the third effort I was responsible for taking the data to its final state and for training field staff employed by the Student Conservation Association.
Based on these experiences, I have come to understand that the keys to a successful data collection effort lie in:
- Flexibility, and
- Very clear goals of what is to be accomplished.
Much of the conceptual background and structure of managing a geospatial data model using a platform-independent model (PIM) has been outlined in previous posts. And while it is useful to understand how the PIM is structured in the tables and views inside the PIM database, the critical knowledge lies in the way the PIM API retrieves and organizes this information, and in the properties and methods of the various PIM API objects. So if you have not been particularly interested in the conceptual, now would be a good time to pay attention.
This post begins the process of moving from the abstract to the concrete. In the next series of posts, we will begin to build a simple data model, not only to demonstrate the concepts that have been previously discussed but also to showcase many of the existing tools that help with management tasks such as version migrations, script generation, and data validation. In this post, we'll begin discussing the API, which provides the business rules for interacting with the PIM, enables configuration management activities, and acts as the interface between the PIM and the applications that use it.
The PIM now contains sets of features (the pimFeature) grouped into sets (the pimConfiguration) with properties and referencing sets of attributes (the pimAttribute). For many GIS solutions, this is all that is required for collecting, validating, and displaying geospatial data.
But the PIM also provides the ability to constrain attribute values in a variety of ways, much as many RDBMS data stores do. Probably the simplest of these is the ‘Nulls Allowed’ property. In nearly all cases, attributes will be coded as ‘Nulls Allowed’, but specific user preferences might dictate otherwise, provided the user fully understands the implications of this coding.
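The idea can be sketched as follows. This is a minimal illustration in Python, not the actual PIM API; the class and field names here (`PimAttribute`, `nulls_allowed`, `validate`) are hypothetical stand-ins for whatever the real API exposes. It simply shows how a ‘Nulls Allowed’ property on an attribute definition can enforce the same rule as an RDBMS NOT NULL constraint at validation time.

```python
from dataclasses import dataclass

@dataclass
class PimAttribute:
    """Hypothetical attribute definition carrying a 'Nulls Allowed' flag."""
    name: str
    nulls_allowed: bool = True  # the common default, as noted above

    def validate(self, value):
        # Reject NULL (None) only when the attribute is coded
        # as not allowing nulls -- analogous to a NOT NULL constraint.
        if value is None and not self.nulls_allowed:
            raise ValueError(f"{self.name}: NULL values are not allowed")
        return value

# An attribute a user has chosen to code as 'Nulls Allowed = No':
status = PimAttribute("featureStatus", nulls_allowed=False)
status.validate("Existing")   # passes
# status.validate(None)       # would raise ValueError
```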
This post continues a series describing our platform-independent approach to managing data models and standards. In any situation where a geospatial data model is agreed upon and adopted by multiple participants, management of the model itself becomes a significant issue. Over time, modifications to the model are required, and such modifications can have significant impacts on the implementations of each participant. As a result, configuration management is needed to ensure smooth transition and traceability between data model versions.
Background: While many organizations see the clear advantages of establishing and using geospatial data standards, obtaining compliance from constituents continues to be a frustrating, and costly, problem. Users are frequently reluctant to adopt a new standard, whether because of the potential cost of modifying applications that use an existing, custom schema or simply out of a ‘not invented here’ mentality. This was true within the Department of Defense (DoD) Installation Geospatial Information and Services (IGI&S) community, even though a standard had existed for more than 10 years. As part of the 2006 Spatial Data Standards for Facilities, Infrastructure, and Environment (SDSFIE) initiative, Zekiah developed a data standardization concept intended to provide flexibility in schema naming conventions and organization without compromising ease of data sharing, conversion, and information understanding.
Spatial Data Standards for Facilities, Infrastructure and Environment (SDSFIE): Five Fundamental Tools
The Spatial Data Standards for Facilities, Infrastructure and Environment (SDSFIE) is the single Department of Defense spatial data standard that supports common implementation and interoperability for installations, environment, and civil works missions.
SDSFIE is being managed by the Defense Installations Spatial Data Infrastructure (DISDI) Group. The DISDI Group is a formal governance group reporting to the Department of Defense’s Installations & Environment Investment Review Board.
Starting with the 2005 version, Microsoft SQL Server has included encryption capabilities that all administrators, programmers, and database analysts should be aware of. Since then, SQL Server has natively supported both hashing and encryption. When planning your encryption or hashing solution, you first need to decide whether you will store an encrypted version of the data or a hashed copy of the data. The difference is that encrypted data can be decrypted, while hashed data cannot.
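The distinction can be seen in a few lines of code. This sketch uses Python's standard library rather than SQL Server's native functions, purely to illustrate the concept; the toy XOR cipher stands in for a real algorithm such as AES, and the sample value is made up.

```python
import hashlib

# Hashing is one-way: the same input always yields the same digest,
# but there is no function that recovers the input from the digest.
ssn = "123-45-6789"  # made-up sample value
digest = hashlib.sha256(ssn.encode()).hexdigest()

# You can still *verify* a candidate value by re-hashing and comparing:
assert hashlib.sha256("123-45-6789".encode()).hexdigest() == digest

# Encryption, by contrast, is reversible when you hold the key.
# (Toy single-byte XOR cipher for illustration only -- never use this
# in practice; real systems use algorithms such as AES.)
key = 0x5A
ciphertext = bytes(b ^ key for b in ssn.encode())
plaintext = bytes(b ^ key for b in ciphertext).decode()
assert plaintext == ssn  # decryption restores the original value
```

This is why hashing suits values you only ever need to compare (such as passwords), while encryption suits values you must later read back in the clear.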