xyHt by Dave Doyle - 4d ago

A basic illustration of Chandler’s Wobble.

Above: The Gaithersburg Latitude Observatory is now a community park.

My tale of locating the latitude pier of the Cincinnati Latitude Observatory, one of six international latitude observatories now defunct but historically significant.

It was 1974, and I’d been working at the National Geodetic Survey (NGS) for two years. One afternoon my boss told me to deliver paperwork to an office we had in Gaithersburg, Maryland, just a few miles away. I’d never heard of this office, but dutifully off I went. When I got there and saw what it was, I could only think, “I want this job!”

Six “Wobble” Observatories

The Wanschaff telescope was used in all six observatories.

The Gaithersburg International Latitude Observatory was just over eight miles from our headquarters in Rockville, Maryland. You really couldn’t call it an office. The entire facility consisted of a small “shack”—the latitude observatory—that contained the astronomic telescope designed specifically for the purpose of observing latitude as well as the house where the observer, Mac Curren, and his family lived.

The observer’s job every clear night was to perform a set of measurements for an assigned set of near-zenith pairs of stars following the Horrebow-Talcott method. This site was one of six similar observatories located around the globe at approximately 39° 08’ north latitude.
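
For readers unfamiliar with the Horrebow-Talcott method: pairing one star just north of the zenith with one just south means the observer measures only the tiny difference between the two zenith distances with the eyepiece micrometer, so most of the refraction cancels, and the latitude falls out of simple arithmetic. Below is a minimal sketch of that reduction; the star pair and the micrometer value are invented for illustration.

```python
# Illustrative only: the Horrebow-Talcott reduction for one star pair.
# Star declinations come from a catalog; the observer measures only the small
# difference of the two near-zenith distances with the eyepiece micrometer.
# All numbers below are made up for the example.

def talcott_latitude(dec_south_deg, dec_north_deg, z_diff_arcsec):
    """Latitude (degrees) from a meridian pair, one star south and one north of the zenith.

    phi = (dec_S + dec_N) / 2 + (z_S - z_N) / 2,
    where z_diff_arcsec is the measured difference z_S - z_N in arc seconds.
    """
    return (dec_south_deg + dec_north_deg) / 2.0 + z_diff_arcsec / 2.0 / 3600.0

phi = talcott_latitude(dec_south_deg=38.90, dec_north_deg=39.37, z_diff_arcsec=10.1)
print(f"latitude ~ {phi:.5f} deg")  # ~39.13640 deg, i.e. about 39 deg 08'
```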

All six observatories were observing the same stars each night from their respective locations, and those measurements were used to determine the wobble of the Earth's rotation axis relative to its crust, typically about 0.7 arc seconds over a period of roughly 427 days. It's commonly called the Chandler Wobble, after the American astronomer Seth Carlo Chandler, who described the effect in his signature 1891 paper, "On the Variation of Latitude."
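
For a sense of scale: 0.7 arc seconds is a tiny angle, but projected onto a globe the size of the Earth it corresponds to a pole displacement you could pace off. A quick back-of-the-envelope conversion, assuming a mean Earth radius:

```python
import math

# Rough ground-scale of the wobble, assuming a mean Earth radius.
EARTH_RADIUS_M = 6_371_000
ARCSEC_TO_RAD = math.pi / (180 * 3600)

wobble_arcsec = 0.7
ground_motion_m = wobble_arcsec * ARCSEC_TO_RAD * EARTH_RADIUS_M
print(f"{ground_motion_m:.1f} m")  # about 21.6 m of polar motion at the surface
```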

The network of observatories made up the foundation of the International Latitude Service that had been created by the International Geodetic Association in 1895. In addition to Gaithersburg, the network originally included stations located in Cincinnati, Ohio; Ukiah, California; Mizusawa, Japan; Charjui, Turkestan; and Carloforte, Italy.

The Cincinnati Latitude Observatory was built on the grounds of the Mt. Lookout Observatory.

All stations in the network began their observations in the fall of 1899. With some minor differences, all of the sites used the same equipment. The nearly identical meridian telescopes were all designed and constructed by Julius Wanschaff in Berlin, Germany. An excellent description of this subject can be found in the article, “Wandering Pole, Wobbling Grid” by Trudy E. Bell.

Defunct Observatories

With the significant improvement of geodetic satellites and other space-based observation techniques, by 1980 the functions of these optical observatories were no longer needed. With the retirement of Mr. Curren, regrettably, that job went away.

The facilities in Gaithersburg and Ukiah were turned over to their respective city parks departments, which in both cases have preserved the observatories and created beautiful community parks. In fact, my wife Renee Shields-Doyle and I were married at the Gaithersburg site in June of 2017.

Due to a combination of financial and network geometry considerations, the Cincinnati Latitude Observatory, which had been built on the grounds of the existing Mt. Lookout Observatory, ceased this function in 1915, having participated for only 16 years. After my retirement from NGS in 2013, I continued to have numerous opportunities to travel and stay engaged with the surveying and geographic information communities, including volunteering as a docent for the Gaithersburg observatory.

In October of 2016, my not-yet wife and I paid a visit to her brother and his family who live in the Cincinnati area, and while there we took a few hours to visit the Cincinnati observatory built in 1873 on Mt. Lookout.

Because of its unique location along the 39th parallel of latitude, the facility and staff have had a long and important legacy of collaboration with the U.S. Coast & Geodetic Survey (USC&GS), the predecessor to NGS, independent of the Latitude Observatory function.

Between July 18 and October 5, 1881, USC&GS established a separate astronomic latitude and longitude station on the observatory grounds under the direction of Edwin Smith, G.W. Dean, and Cephas Sinclair. The stone pillars that supported their transit telescope still stand behind the observatory's main building in excellent condition.

Even though this station played an important role in the work of USC&GS to determine the deflection of the vertical for the expanding national triangulation network, the geodetic coordinate values were never published as part of the National Spatial Reference System. Data for this station is published only in USC&GS Special Publication 110, "Astronomic Determinations by United States Coast and Geodetic Survey and other Organizations." Sadly, when the function of the Cincinnati Latitude Observatory ceased in 1915, the pier (the stand on which the telescope was mounted) and the shed protecting it were razed, and the Wanschaff telescope was transferred to the Smithsonian Institution, never to be seen again (pan to the final scene of Raiders of the Lost Ark!).

Planning to Locate the Pier

During our visit, Renee and I had the opportunity to meet with one of the staff members and discuss not only the science history of the observatory but also its participation in the International Latitude Service. While the staff was certainly aware of the function of the station, they had only a general idea of where it had been located.

In the course of the conversation, we suggested an effort be made to recover the exact location of the latitude pier and commemorate its history. While the observatory staff was exceedingly supportive of our idea, they cautioned that before a spoonful of dirt could be turned to look for the pier, they needed approval from the administration of the University of Cincinnati (UC), which owns the grounds the observatory sits on.

With efforts begun to connect with the appropriate UC office, I decided it was also important to connect and partner with representatives of the Professional Land Surveyors of Ohio (PLSO). PLSO executive director Melinda Gilpin suggested we contact Gary Nichols and Rose Coors, officers of the PLSO southwest chapter, and Steve Cahill of the Cincinnati chapter. Gary and Rose are also active members of the Surveyors Historical Society (SHS).

In short order we had everyone excited and engaged in planning and coordinating an effort to recover the precise location of the observatory pier. We were all hopeful that when the latitude observatory building and pillar were taken down, the portion of the pillar just below ground level had been left in place.

Chunk!

We hoped it would require only a few phone calls or emails and just a couple of weeks to get the required permission from UC, but the wheels of any bureaucratic organization can sometimes turn slowly.

After two years, on the morning of Monday, October 22, 2018, all the parties convened in the observatory’s conference room. During an hour-long meeting, we explained the historical aspects of the latitude observatory and the general ideas that PLSO and SHS had to recover and commemorate the site. The representatives of the UC Office of Planning, Design and Construction gave their permission for us to begin our recovery effort. Being optimistic about the outcome of the meeting, we had arranged that all the necessary participants be there. The happy band of surveyors quickly moved into action. Several different activities began immediately.

Working with previous computations, Gary and Rose had set a small stake at the coordinates that we believed to be the location of the telescope pillar. Two observers using ground-penetrating radar (GPR), graciously provided by Rob Harris, owner of The Underground Detective, began to methodically scan an area around the stake Gary and Rose had set.

It did not take long for the GPR observations to return exciting images of what appeared to be a 4-foot by 4-foot structure almost exactly where the stake had been set. The GPR team, using white spray paint, outlined an area with the greatest signal return. We were all feeling cautiously optimistic.

I was given the great honor of pushing in the first probe; the tip of the probe rod made that wonderful "chunk" noise just a few inches below the grass, eliciting smiles and a rush that few but surveyors can appreciate. We knew we had found it.

Several members of the team with small shovels had uncovered the brick foundation of the latitude pier for likely the first time since it had been dismantled and buried sometime in the 1980s.

The author and several team members excitedly uncover the brick foundation.

Final Steps

While this was going on, Steve Cahill of Abercrombie Associates set a dual-frequency GNSS receiver over the center point of the 1881 USC&GS astronomic station, located about 160 feet north of the latitude observatory pillar, with the intention of completing sufficient observations to submit that station to NGS for inclusion in the OPUS Shared Solutions database.

While the station is in a GPS-able location, some data-collection malfunctions prevented acquiring data of sufficient quantity and quality to submit at that time. Attempts were also made to collect GPS observations at the latitude pillar, but these were greatly hampered by a large tree just to the south, with the same result. Another attempt at both stations will be made by PLSO volunteers during the winter, when the leaves are off.

With the missing pier now found, what remains is to construct a commemorative marker and interpretative signage in collaboration with PLSO, UC and the staff of the observatory and to host an appropriate dedication ceremony open to the public.

This effort was the work of many people who all deserve recognition. They include: Craig Niemi, executive director of the Cincinnati Observatory; John Ventre, observatory historian; Lucy Cossentino-Sinnard and John Martini from the UC Office of Planning, Design and Construction; Rob Harris, Carl Goyette, and Erik Vaundry from The Underground Detective; Gary Nichols and Rose Coors, principals of Nichols Surveying; Melinda Gilpin, executive director of PLSO; Steve Cahill from Abercrombie Associates; retired local surveyors Dan Rensing and Lee Nordloh; Renee Shields-Doyle and myself of Base 9 Geodetic Consulting Services.

I am sorry I never got "that job." It has been a wonder watching the technology used for surveying advance in leaps and bounds over the last 50 years, compared with the previous 200, and I wouldn't have missed a minute of it. Still, I do miss the old days: tracking the stars, sipping "observing fluid," and marveling at the universe.

The post Search for the Missing Pier appeared first on xyHt.

xyHt by Marc Delgado, PhD - 6d ago

Above: The cramped field conditions in the underground tunnels are evident in this point cloud Trammell used to create the 3D model of the tunnel piping.

From crowded boiler rooms to hard-to-reach underground tunnels, BIM experts tell us how following a smooth workflow is key to rapidly creating accurate 3D pipe models from point clouds.

Inside the Student Union Center building of Central New Mexico (CNM) Community College in Albuquerque, New Mexico, Bob Ferguson took on a challenge different from what he usually encounters in his daily work in the field. Within the short span of allotted class time, his task was to show students how to create an as-built 3D model of the building's cramped boiler room.

Rapidly producing an accurate 3D model of pipes is hard work, especially if they twist and turn like noodles in unreachable corners. Thus, the boiler room was a good choice to display the capabilities of automated feature extraction, one of the latest technologies that has modernized the architecture, engineering and construction (AEC) industry in recent years.

Ferguson works as a scanning and virtual construction specialist for Jaynes Corporation, a full-service contractor with AEC projects in New Mexico, Colorado, and Texas. The company, which eagerly incorporated virtual construction technologies early on in its operations and has established an in-house scanning and modeling facility, encourages its employees to engage with local colleges by giving technology teach-ins to students in the communities where they operate, similar to the demonstration that Ferguson gave at CNM.

“I participate in community activities as often as I can, and I enjoy doing it,” he shared when asked why he took part in the demonstration at CNM.

Bob Ferguson

On that day in the boiler room, Ferguson was wearing a teacher's hat. The technology demonstration for the architecture/engineering computer drafting program was an opportunity for the students, the future workforce of the AEC industry, to glimpse the different stages required to create a building information model (BIM), from using 3D laser scanners to scan the boiler room to using automated extraction software to produce a 3D model of the pipes, valves, and other piping elements.

At the end of the activity, not only were the students able to take part in a real-life demonstration of the latest 3D modeling technology used in the industry, but the registered scan and model were also donated to the college's drafting program, where students can use them as a reference in their own projects.

Another beneficiary of the activity was the college facility manager who likewise received the up-to-date as-built model of the boiler room, which can be used in maintaining and possibly upgrading the equipment in the future.

“For me personally, there is no greater reward than the opportunity to share knowledge with students who are coming up through the trades,” Ferguson said. “As others have taught me, this is my chance to pay it back.”

According to him, the entire scanning, extraction, and modeling took less than 20 hours, a reduction of modeling time by about 75% compared to the more traditional workflow that students learn today.

How was he able to create the 3D pipe model of the boiler room in so little time?

A portion of the 3D model from Ferguson’s project; the model was automatically extracted from a point cloud by EdgeWise and imported to Revit.

Seamless Pipe Modeling Workflow

Even though it was only a class demonstration, Ferguson worked with his team to show the students how professional BIM work is done in the field. They followed a seamless workflow to accomplish the entire task, from laser scanning the room to creating the actual BIM.

“We wanted to demonstrate to the students a best-case scenario of exactly what the technology could accomplish,” Ferguson said.

From the start of the workflow process, Ferguson and his team set an extremely high level of modeling detail, a requirement that clients often ask for in real projects. “The resolution needed to be such that the bolt patterns on flanges of the boiler room could be identified and accurately measured,” described Ferguson.

To meet this demanding requirement, they used a FARO Focus S-150 laser scanner set to a resolution (1:4) that would capture about 44 million points in a single 360-degree scan and show details smaller than a quarter of an inch on objects 30 feet from the scanner. They also performed numerous scans, 21 in total, from multiple overlapping perspectives within the 4,000-square-foot boiler room. This is a large volume of data with very precise detail.
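
Those figures are easy to sanity-check with a little geometry. The sketch below assumes a Focus-class field of view (360 degrees horizontally from a 180-degree head rotation and roughly 300 degrees of usable vertical sweep; these are assumptions, not quoted specifications). With an equal angular step in both directions, 44 million points per scan works out to just under a quarter inch of point spacing at 30 feet.

```python
import math

# Sanity check of the article's numbers under an assumed scan geometry.
POINTS_PER_SCAN = 44_000_000
HEAD_ROTATION_DEG = 180.0      # assumed: 180 deg of head rotation covers 360 deg around
USABLE_VERTICAL_DEG = 300.0    # assumed: unobstructed part of the vertical sweep

# Equal angular step in both directions:
step_deg = math.sqrt(HEAD_ROTATION_DEG * USABLE_VERTICAL_DEG / POINTS_PER_SCAN)

# Point spacing on a surface 30 feet from the scanner:
range_m = 30 * 0.3048
spacing_m = math.radians(step_deg) * range_m
print(f"step ~ {step_deg:.4f} deg, spacing ~ {spacing_m * 1000:.1f} mm "
      f"({spacing_m / 0.0254:.2f} in) at 30 ft")
# -> roughly 5.6 mm, or about 0.22 in: consistent with "less than a quarter of an inch"
```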

After using FARO Scene software to register the individual scans into a single point cloud model, Ferguson imported the data into EdgeWise, which allowed him to automatically extract the architectural and piping elements of the boiler room. The structural objects were later obtained with EdgeWise's user-guided, semi-automated extraction tools.

EdgeWise software, a product of the ClearEdge3D company from Virginia, uses computer vision algorithms that automatically extract geometry from point-cloud data. The feature-extraction algorithms and automated-modeling technology of EdgeWise can create walls, pipes, conduits, and steel beams from laser-scan point clouds.
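
ClearEdge3D does not publish its algorithms, so the sketch below is not EdgeWise; it only illustrates the basic geometry of pulling one pipe out of a point cloud: estimate the cylinder axis from the points, then fit a circle to the points projected onto the plane perpendicular to that axis. It assumes the cloud has already been segmented down to a single clean pipe surface.

```python
import numpy as np

def fit_pipe(points):
    """Estimate axis direction, a point on the axis, and radius of one pipe segment.

    points: (N, 3) array sampled from a single cylinder's surface. Production
    tools must first segment the cloud and reject outliers; that is skipped here.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid

    # For a segment that is long relative to its radius, the direction of
    # greatest variance approximates the cylinder axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis, u, v = vt[0], vt[1], vt[2]

    # Project onto the plane perpendicular to the axis and fit a circle
    # (algebraic fit): x^2 + y^2 = 2*a*x + 2*b*y + d, radius^2 = d + a^2 + b^2.
    xy = np.column_stack([centered @ u, centered @ v])
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(d + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return axis, center, radius

# Synthetic check: a 0.10 m radius pipe along the x axis with 2 mm of scan noise.
rng = np.random.default_rng(1)
t = rng.uniform(0, 3, 2000)
ang = rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([t, 0.10 * np.cos(ang), 0.10 * np.sin(ang)])
pts += rng.normal(scale=0.002, size=pts.shape)
axis, center, radius = fit_pipe(pts)
print(np.round(axis, 3), round(float(radius), 3))  # axis ~ [+/-1, 0, 0], radius ~ 0.10
```

Commercial tools layer much more on top of this: finding candidate patches in the first place, connecting segments into runs, snapping to catalog diameters, and exporting intelligent components.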

The use of EdgeWise allows for shorter modeling time within a unified, start-to-finish workflow. In Ferguson's demonstration, once the boiler room elements were extracted in EdgeWise, the final step was to export the model to Revit, Autodesk's multidisciplinary BIM software. EdgeWise was also used to export the architectural elements to Revit as intelligent building components.

The workflow, from start-to-finish, that Ferguson followed to create the BIM of the CNM’s boiler room.

In this workflow, Trammell’s team first measured the pipe dimensions before scanning and creating the BIM of the hospital’s underground piping tunnels.

EdgeWise has a strong integration with Revit that allows it to transfer as-built models to Revit, which eliminates the need for manual modeling of pipes, conduit, and other elements in Revit, thus saving time and money. Ferguson estimated that had the project been attempted manually, the entire process would have required 80 hours, mostly to map, identify, and align each pipe run to the point cloud.

“EdgeWise saved us a tremendous amount of time [thanks to] the efficiency of the software and the technical support from ClearEdge3D,” said Ferguson.

Another time-saving EdgeWise function that Ferguson mentioned is the Remainder Cloud function. With it, modelers can extract, select, and isolate a single feature, such as a pipe, as a lightweight point cloud. Because this greatly reduces the overall size of the project file, you can then quickly and easily measure the isolated pipe point cloud, trace its route from supply to boiler, and determine whether it collides with other systems.

“The isolated point cloud greatly reduces the entire file size of the project, in turn, increasing the performance of the users authoring or viewing the model,” shares Ferguson. “This makes the modeling workflow faster and more efficient.”

Kelly Cone

Kelly Cone, ClearEdge3D’s vice president of industry strategy, told us that their EdgeWise customers are reaping the advantages of current advances in computer vision technology.

“Computer vision technology gives our customers a massive head start on their as-built model. Our piping module, for instance, can usually extract 70 to 80% of the piping and auto-connect it into contiguous systems. So, if you’ve got 1,000 existing pipes (or conduit, or round duct) to model, wouldn’t you rather start your modeling with 750 of them already done for you with only a few hours of QA/QC to review them?” Kelly said.

ClearEdge3D was founded in 2006. The company's track record of technological innovation in the automated feature extraction, laser scanning, and 3D modeling space has been growing ever since it first released EdgeWise in 2009. The founders are recognized experts in computer vision, hyperspectral data analysis, and lidar data analysis.

In 2018, Topcon acquired all the outstanding shares of ClearEdge3D. “It’s a great opportunity to combine the EdgeWise and Verity software platforms with the Topcon sales and financial resources to better serve all of our customers,” said its founder and CTO, Kevin Williams, in a press release.

Underground Pipe Modeling

John Trammell

Another successful application of automated point-cloud feature extraction in 3D pipe modeling was demonstrated by John Trammell, industrial virtual design manager at Brandt Companies. He had the same positive experience with EdgeWise when the company was contracted to scan and create a 3D model of the underground pipes and structural steel in tight tunnels below a hospital in Texas.

Trammell and his team likewise followed a comparable modeling workflow to accomplish the tasks. Using a FARO Focus S-350 laser scanner, they captured 35 scans in the two dark underground tunnels, each of which measured about 6 feet wide and 350 feet long. And to make sure that the individual scans could be precisely registered into a single point cloud, the team performed target registration, ensuring that three or four targets were visible from each scan location. They also performed additional manual measurements of the pipe depth and diameter.

“We learned that spending time onsite to measure the thickness of the pipe insulation as well as pipe diameter ahead of the scanning saved a ton of time during the modeling process,” said Trammell.

The laser point clouds were then registered in FARO Scene before they were loaded into EdgeWise to automatically extract the underground pipes and model the valves, fittings and structural steel beams.

During EdgeWise's automated pipe extraction process, the diameter, elevation, and slope of the piping systems are accurately identified. The automatic connection tools in EdgeWise then allowed the team to trace the underground pipe runs even when they passed behind obstructions. "Sometimes real-world conditions don't allow pipes to be hung in perfect 45- or 90-degree angles," said Trammell. "EdgeWise lets you convert pipes with their real geometries—even if it's an 88-degree turn."

According to Trammell, the entire scanning and modeling process was completed in just under two weeks. Had the entire process been done manually, it would have taken eight weeks to deliver the work, a time saving of roughly 80 percent. Trammell adds that the simplicity of the EdgeWise user interface makes for a delightful user experience. "ClearEdge3D is allowing Brandt Companies to leverage construction technology in existing facilities in ways that have never been possible in the past," he said.

Build Better Pipe Models

One of the underground piping segments in Trammell’s project, created using EdgeWise’s automated feature extraction technology.

Ferguson’s nine years of experience in virtual construction and scanning not only makes him a natural fit to conduct technology demonstrations in colleges; he is also an expert on process piping and scanning projects using BIM and virtual design and construction, with an AGC Certificate of Management-Building Information Modeling (CM-BIM) credential under his belt.

What pointers did he give the students after the demonstration? For Ferguson, who believes that every project has multiple challenges, his advice to them was “to always make sure that the derived as-built model meets the needs of the client.”

During the demonstration, Ferguson also recommended answering these questions before starting any scanning and modeling project:

  • What are the scans being used for?
  • What elements do you need to see?
  • What is the level of detail needed?
  • What deliverables are expected?

The following are Ferguson's key takeaways for the students of CNM, to prepare them for when it is their turn to create their own 3D pipe models. Current BIM practitioners may heed them, too.

Ensure good coverage when scanning. EdgeWise's automated feature extraction algorithms work best when elements are scanned in the field from at least two opposing lines of sight.

According to Ferguson, doing this will “increase the coverage of the element by the laser, thus providing more points for the software to accurately interpret the geometry and extract the pipe elements faithfully.”

Understand your tools. For Ferguson, it is just as important to understand "what your tools cannot do as well as what they can do." This will help set your clients' expectations about what they will receive as a deliverable, as well as help potential clients understand the capabilities of the technologies.

“Reality capture is just that, a snapshot of the exact existing conditions,” Ferguson shared. “Educating your clients that what they are going to receive will reflect ‘as exists’ conditions will help them greatly when adopting point clouds and the models derived from them into their workflow.”

Contact the creators of the tools you use and learn best practices from them. For Ferguson, there are almost as many reliable software options out there as there are people using them. But what separates the platforms from one another is the support offered by their technical staff and software engineers.

"Don't hesitate to reach out to them," he said. "You can almost be guaranteed that they have heard your question before and have, if not one, multiple solutions. The staff at ClearEdge3D have consistently responded in a timely manner and gone above and beyond to help me with any issue I have brought to the table."

The Future Is with AI

According to a report published by Zion Market Research in 2018, the global BIM software and services market is expected to grow in the coming years, reaching USD 10.36 billion by 2022. The report's authors attribute the growth of the BIM market to the improved visualization, increased productivity, and dramatic reductions in delivery time and costs that come with the use of BIM.

The time and cost savings can clearly be attributed to the use of automated feature extraction in the BIM process, thus making it commercially viable to turn large amounts of point-cloud data into usable 3D models. More businesses will adopt BIM when other opportunities open, such as fewer manual inputs required from users and, hopefully, lower-priced instruments.

What's in store for customers in the future? Will we see further improvements in automated feature extraction technology? We again asked Kelly Cone of ClearEdge3D what to expect from computer vision and how their products will evolve in the coming years.

“Computer vision, like all artificial intelligence-related fields, is seeing amazing benefits from developments in machine learning and the general advancement of processing power over time. So, approaches that were simply too computationally expensive or too complex before from a training perspective are now much more achievable,” Cone said.

Clearly, as the demand for BIM products continues to grow, innovations in hardware will be required to meet the increasing computational demands of computer vision technologies. For now, BIM operators make do with currently available hardware to create their as-built 3D models; Trammell, for example, used a dedicated computer solely for data processing because of the large point-cloud files in his hospital tunnel project.

Cone believes in another couple of years we’ll see more sophisticated and accurate solutions due to the availability of more advanced AI approaches. “We will see an expansion in what we can realistically achieve with AI in terms of scope. Problems that might have been too complicated for more traditional methods might not be so difficult using machine learning.”

Another important issue is AI's "black box" problem. By its nature, automation creates a black box for the BIM user: data goes into the computer and comes out as processed information, and what happens in between is not clear to either the user or the client. That is why Cone maintains that it is more interesting to approach AI as an interactive and assistive tool, with less "black box" and more "part of your team," with clearer workflows.

“It is exciting for us to see these discussions on AI enter the mainstream over the last few years. The increased awareness we see in the AEC market will drive innovation and investment and that’s good for those of us already working to bring these kinds of solutions to this industry!” Cone shared.

And that is exactly what Ferguson is doing: bringing the technology to the mainstream through his participation in teaching demonstrations. He is engaging with the industry's future users, the students, and building awareness of BIM's usefulness with the public. He hopes this will form a positive feedback loop that encourages improvements in existing automated feature extraction software and services.

When asked if he will continue to give technology demonstrations in community colleges, Ferguson had this to say: “Being able to explain your workflow to others who have never been exposed or have little exposure to what you are doing actually helps me do my job better.”

The post Piping Hot Tech appeared first on xyHt.


Your input is requested. Surveying professionals and practitioners are invited to weigh in on matters of surveying and geomatics education, succession planning, branding, promotion, and outreach.

The 27th national biennial meeting of the Surveying and Geomatics Educators Society (SaGES) will be held August 4-8, 2019, at Nicholls State University in Thibodaux, Louisiana (near New Orleans).

Co-hosted by SaGES and the geomatics program of Nicholls State University, this biennial event has become a lively forum for matters of formal surveying and geomatics education but has grown to cover broader challenges, and opportunities, for the profession. I've attended a number of SaGES events, and the broad-ranging dialogue represents "front line" facets of the future of surveying. Here are a few topics to consider:

  • How can the profession keep up with the rising demand for surveyors amid the current uptick in infrastructure development?
  • How has mentoring evolved in the age of automation?
  • How can the profession preserve the critical fundamentals of surveying and also prepare surveying and geomatics students for the evolving workplace?
  • How can surveying grow a brand that informs and educates our clientele, growing markets, and the general public about the valuable and critical work that we do?

This is your opportunity to provide direct input to the front line of the future of the profession. Consider sending someone from your state society or local chapter, having that society sponsor (if it has not already done so) the attendance of an educator or student from your local surveying program, or attending yourself (you've always wanted to visit New Orleans). The event can also count toward professional development hours (PDH).

We see great ideas for the future of surveying (OK, and occasionally some not-so-great ideas) being discussed on surveying forums and social media, so how about bringing those ideas and feedback to a national organization that is in a position to act on them? And if you can't attend, please send your input and ideas to SaGES via their website.

The SaGES 2019 agenda and registration can be found at the conference website.

The post Help Shape the Future of Surveying – SaGES 2019 appeared first on xyHt.

xyHt by Eric Van Reese - 1w ago

Above: The forests of Puerto Rico are second-growth, making for useful modeling.

After hurricane deforestation in Puerto Rico, NASA scientists study forest regrowth using high-resolution, multi-sensor image data.

The Biospheric Laboratory of the NASA Goddard Space Flight Center is dedicated to increasing knowledge about the Earth’s terrestrial ecosystem. One group of scientists there has been studying rates of forest regrowth following harvesting or natural disasters. These forest regrowth rates depend on local factors such as climate, elevation, and soil types.

By collecting ground and remotely sensed observations across a gradient of factors, researchers can begin to determine whether tropical forests will continue to be a significant carbon sink in the future.

The island of Puerto Rico makes an interesting research area for forest-recovery modeling, as it was almost completely deforested during the middle part of the 20th century. The forest now present in Puerto Rico is regrown, second-growth forest. The geography of the island is diverse, with many differences in soil types, elevation, and climate.

During 2017, a project funded by the U.S. Department of Energy was initiated to study tropical forest regrowth rates. Additionally, ecologists were interested in the role of tropical forests in terms of sequestering carbon. The project consisted of forest data acquisition through airborne lidar and image data, as well as ground measurements and terrestrial lidar scanning.

A 3D lidar image of an African tulip tree, mapped as a highly identifiable tree and fast re-growth colonizer.

The African Tulip Tree

Spathodea campanulata, or African tulip tree, is a non-native tree species with orange-colored flowers that occupies much of the island. It is an effective colonizer following disturbance and abandonment of agricultural land. The researchers were interested in producing maps of the island that showed where the tree can be found and how much forest canopy volume it represents.

NASA scientist Dr. Ian Paynter explained that the motivation for this study is to “use a highly identifiable species to understand the ecological significance of the colonization of particular regions over others, after a major disturbance such as the 2017 hurricanes. The resulting expansion of the African tulip tree is controlled by the varying levels of hurricane damage and other underlying factors such as land-use history and soil type.”

In order to produce this African tulip map, both aerial and terrestrial data were collected. For this project, scientists chose to use a combination of airborne lidar and optical image data to characterize the tree canopy, while ground observations and a terrestrial laser scanner were used for calibration and validation. The project data is available for download from NASA through a web data portal.

Approximately 50 flight hours were carried out over a week during 2017, which resulted in 20 terabytes of raw data. The same areas were reflown during 2018, following hurricanes Irma and Maria. An interactive map available on the G-LiHT (see below) website allows users to view and download data. Transects were flown across the island to cross ecological, environmental, and elevation gradients.

Dr. Bruce Cook checks the G-LiHT instrument port underneath the aircraft in Puerto Rico.

G-LiHT

The flights were carried out using an instrument package called G-LiHT, which stands for Goddard’s Lidar, Hyperspectral and Thermal airborne imager. It consists of two scanning lidars; a Phase One Industrial aerial camera; a Hyperspec VNIR imaging spectrometer; a fine-resolution red-edge imaging spectrometer; VNIR and red-edge solar irradiance spectrometers; a thermal infrared camera; and a precision GPS-INS. These components are all commercially available instruments that provide the stability and reliability needed to take measurements on a daily basis over a long time period.

G-LiHT is a portable airborne imaging system that simultaneously maps the composition, structure, and function of terrestrial ecosystems using lidar, imaging spectroscopy, and thermal instruments. By providing coincident data in time and space, it enables data-fusion studies and delivers the fine-scale (<1 m) observations over large areas that are needed in many ecosystem studies.

G-LiHT uses lidar to provide 3D information about the spatial distribution of canopy elements; VNIR (visible to near-infrared) imaging spectroscopy to discern species composition and variations in biophysical variables; and thermal measurements to delineate wetlands and to detect heat and moisture stress in vegetation.

Dr. Bruce Cook from NASA explains why a combination of several different sensors were used: “Each sensor provides a little bit of different information and context for the things we’re interested in, such as species, cover of forest vegetation, their health and productivity. We can better characterize forest ecosystems with multiple sensors as opposed to one killer sensor that does everything. For example, we merged fine-resolution imagery from the Phase One camera with lidar data to quantify forest canopy volume that is occupied by Spathodea.”

Aerial Camera

The Phase One aerial camera offers 100 MP resolution, with cross-track coverage of 11,608 pixels. It is equipped with a CMOS sensor enabling exposure times as short as 1/2,000 second, and its high noise resistance is effective in low light.

The G-LiHT system acquired data for 250 hours in 2017, covering 11 U.S. states.

For this project, the Phase One camera was used to produce RGB photographs for identification of fine-scale canopy features. The camera provided the opportunity to develop new algorithmic approaches that can handle large volumes of high-resolution imagery.

Additionally, the G-LiHT system uses a Riegl airborne laser scanner often used for large-scale mapping. For this project the researchers flew relatively low, which enabled the team to capture data from under the clouds and to obtain as much detail as possible about the ecosystems that were monitored. The airborne data was captured during the dry season when conditions were supposed to be most favorable, meaning clear skies and no precipitation.

One of the most valuable fusions of data turned out to be the lidar returns in combination with the fine-resolution aerial imagery, says Cook. “The Phase One imagery enabled identification of the individual Spathodea flowers, whereas the lidar data enabled the quantification of the tree volume. Combining both data sources made it possible to look for individual flowers as we flew over them at 200 mph.”
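
The article doesn't spell out the bookkeeping of that fusion, but once a lidar canopy height model and a per-pixel flower classification are resampled onto a common grid, the Spathodea canopy volume is a straightforward sum (canopy height times cell area is one common proxy for canopy volume). Below is a hypothetical sketch; the grid size, perfect co-registration, and stand-in arrays are all assumptions.

```python
import numpy as np

# Hypothetical fusion step: a lidar canopy height model (meters) and a boolean
# Spathodea mask derived from the imagery, both resampled to an assumed 1 m grid.
CELL_AREA_M2 = 1.0 * 1.0

rng = np.random.default_rng(0)
chm = rng.uniform(0, 25, size=(500, 500))        # stand-in canopy heights
spathodea_mask = rng.random((500, 500)) < 0.1    # stand-in classification result

spathodea_volume_m3 = float((chm * spathodea_mask).sum() * CELL_AREA_M2)
fraction = float((chm * spathodea_mask).sum() / chm.sum())
print(f"Spathodea canopy volume ~ {spathodea_volume_m3:,.0f} m^3 "
      f"({fraction:.1%} of total canopy volume)")
```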

The airborne data had a finer resolution than satellite imagery of the same area so that observations on the ground could be scaled to airborne and satellite data. Moderate- and coarse-resolution satellite data provides a wide-area coverage at greater temporal frequency, but often with less spatial detail.

Developing a Classification Algorithm

The aerial camera produced large data volumes, exceeding those of any of the other instruments used for the project. To analyze all of the data and see where the Spathodea trees are located, NASA developed a classification algorithm to automate locating the orange flowers in the aerial imagery. This was possible because the camera produced fine-resolution images (~3 cm) in which individual flowers could be found across thousands of pixels. This eliminated the need to unmix spectral information contained in coarser-resolution hyperspectral data.
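
The classifier itself isn't described in detail here, so the snippet below is only a first-cut illustration of the idea: flag pixels whose color falls within an orange range of hue, saturation, and brightness. The thresholds are invented placeholders, and a real classifier also has to cope with shadows, sun glint, and other orange objects in the scene.

```python
import numpy as np

def flag_orange_pixels(rgb):
    """Boolean mask of likely flower pixels in an (H, W, 3) RGB image (0-255 values).

    The hue/saturation/value thresholds are placeholders for illustration,
    not NASA's calibrated values.
    """
    img = rgb.astype(float) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc, minc = img.max(axis=-1), img.min(axis=-1)
    spread = np.maximum(maxc - minc, 1e-6)

    value = maxc
    sat = np.where(maxc > 0, (maxc - minc) / np.maximum(maxc, 1e-6), 0.0)
    # Hue in degrees, computed only for the red-dominant sector (enough to cover orange).
    hue = np.where(maxc == r, 60.0 * (g - b) / spread, 999.0)

    return (hue > 5) & (hue < 40) & (sat > 0.5) & (value > 0.3)

# Example: count candidate pixels in a stand-in image tile.
tile = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)
print(int(flag_orange_pixels(tile).sum()), "candidate pixels")
```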

Terrestrial Scanning

The researchers took measurements on the ground to get a better view of what’s present there and to deal with edge cases that are found in the aerial imagery.

Ground observations provide critical calibration and validation data for every NASA campaign, says Cook. “When you’re collecting observations from an aircraft, you can measure the size of a canopy, but you can’t really determine what’s underneath that canopy or how big a stem or trunk volume is. Those are really best measured on the ground. Also, structural data from airborne and terrestrial lidar can be combined to obtain a more complete view of individual trees.”

The final classification algorithm NASA developed was able to identify flowers under different environmental conditions and enabled the researchers to perform their analysis over landscape- and regional-scales.

Classifying Forest Conditions

Digitized high-resolution aerial imagery and classification techniques open up new territory for NASA, says Cook. “Identifying trees that may be in decline is important for forest health studies. This project has been an excellent testbed for making tools for detecting insect and disease outbreaks. Being able to characterize forest health conditions over large areas is important for ecosystem monitoring.”

Post-hurricane Work

The African tulip tree flower.

When hurricanes Irma and Maria hit Puerto Rico during 2017, the NASA project took a different turn. An emergency situation followed where many people were left without shelter or electricity and with damaged roads and bridges and a shortage of food, healthcare, and fuel.

When the research continued five months later, the same areas were reflown to evaluate the devastating effects of the massive storms on Puerto Rico’s ecosystems, as well as the forest’s gradual recovery.

Dr. Ian Paynter says, “What interested us is finding out where there was more resilience in the landscape, what tree species were most impacted, and what’s going to grow back in the future.”

In addition to aerial data capture, terrestrial laser scanning provided further insights. For example, the understory of the forests had also changed a great deal because of the hurricanes. However, accessibility of certain sites proved to be an issue due to damage to the island's infrastructure.

“The terrestrial laser scanning sites were difficult to access post-hurricane due to fallen trees and landslides. In the terrestrial laser scanning data, we observed a great deal of canopy defoliation, branch loss, and a higher rate of tree mortality than you would normally see,” says Paynter.

There is still work to be done assessing the magnitude and extent of post-hurricane damage. Cook says, “Immediately following the hurricanes we were able to assess mortality and loss of canopy foliage, but future monitoring will be needed to determine the long-term survival of individual trees.

While some trees may still have leaves, there may not be enough canopy remaining to ensure their survival. We have to wait and see if some of those trees that were damaged and lost branches or leaves will survive in the future. Some of what happened as the result of the hurricane won’t be manifest on the landscape until decades from now.”

The post Many Sensors for a Tulip Tree Map appeared first on xyHt.

xyHt by Jeff Salmon - 1w ago

So, here’s my idea: as I enter semi-retirement I’m looking for additional revenue streams, ironically to replace clients who are also entering the retirement phase. As long-time readers of Pangaea will note, I’m pretty excited about the possibilities of UAS, so it’s no surprise that I might consider expanding my side business in that area. However, starting any business is fraught with pitfalls; starting a UAS-service business is even more so considering the amount of regulation involved.

Given the complex nature of such an enterprise, I’ve been examining the different aspects and taking some incremental steps by doing “sandbox” experiments using a lower-end, but still powerful, drone that I already own: the DJI Spark. Now, before we go any further I must state unequivocally that this platform is absolutely not in any way, shape, or form a serious mapping UAS. This is just for practice before I make a serious investment in a professional-grade UAS like those you see gracing the pages of xyHt (plug, plug).

Moreover, my approach to a possible UAS-service business is more aligned with capturing marketing imagery for my existing land-development and home-builder clients, not necessarily mapping. Don’t worry, I’ll get to UAS mapping soon enough; the October and November issues will be devoted to advances in that area as I’ll be attending both the Pix4D User Conference and the Commercial UAV Expo this fall.

Now that you know the backstory, let's get back to the Spark. This unit, while not a serious contender for professional-grade mapping, does have features that make it a suitable entry-level visualization tool. Unlike my previous drones, it has a decent camera (12 MP) mounted on a two-axis gimbal (three would be better, but like I said, entry-level). It also has dual-constellation GNSS (GPS and GLONASS), which is critical. For the full details on its specifications, look here.

Additionally, the operational app has interesting and useful photography features pre-programmed into the software. Features like circle, helix, and a variety of panoramas (including horizontal, 180°, and spherical) seem particularly suited to the work I'm interested in. Circle and helix could be applied to individual home-site photography, while the panorama features could be employed to illustrate a subdivision with an impressive aerial view.

Getting your kit together. Before I went into the field—practicing as a recreational flyer, not a commercial operator—I knew I would need additional pieces of gear. The unit doesn't come with a micro-SD card; I picked up a pair of 64GB cards cheaply at Costco. Make sure the card is UHS-I rated, and don't bother getting anything bigger than 64GB.

Also, an extra battery is a must. The Spark maximum flight time (under ideal conditions) is only 16 minutes. As a noob, I typically use up a good portion of that just exploring the control features.

Another item I invested in after a few flights was an additional battery charger. Out of the box the only way to charge the battery was in the drone itself. An additional charger is a big time-saver.

Having a case or backpack to carry the drone and assorted necessities is another must. I bought an inexpensive aluminum case that has plenty of room for the UAS, controller, batteries, charger, manual, small toolkit, and other handy items.

Getting off the ground. First off, I registered my Spark with the FAA; it’s cheap and easy. Once I had read the instruction manual thoroughly and watched how-to videos, I felt ready to take off.

At a nearby field I set up the Spark and was immediately confronted with the “Warning: No-fly Zone” message. Thanks to geo-fencing, the unit found that the nearby County Sheriff’s Department counts as a prison and has a five-mile exclusion zone. I downloaded an app called Drone Buddy that provides operators with critical info like no-fly zones, along with wind speed and weather conditions. I was amazed at the number of local airstrips that count as airports in my area.

Looking to the west, I found fewer flight restrictions, and so off to the mountains I went.

In part two I’ll talk about my further adventures (and misadventures), and I should have imagery to share.

This article appeared in xyHt‘s e-newsletter, Pangaea. We email it twice a month, and it covers a variety of unusual geospatial topics in a conversational tone. You’re welcome to subscribe to the e-newsletter here. (You’ll also receive the once-monthly Field Notes newsletter with your subscription.)

The post Adventures in Droning: Part One appeared first on xyHt.

xyHt by Anthony Whitlock - 1w ago
Are you doing it right?

In today's world, survey construction staking is a major service provided by most companies in the land surveying and civil engineering industry. Throughout my career, like many of you, I have been employed by a handful of different land surveying and civil engineering firms, and with each firm came a different set of procedural standards and, often, a different business model entirely.

My decade or so of experience has been enough to let me identify what I believe are the three most common, if not the only, business models related to construction staking, and I think they deserve to be highlighted and openly discussed.

Here I review these business models, detail why the companies have chosen to implement them, and give you my personal conclusion as to which model is the best for the surveying profession and why.

Model #1:  The Dummy Model

Model #1 is a business model that’s designed solely to protect the company from as much risk of error as possible.

Under this model, the company's survey office personnel are responsible for any and all decisions about what is staked for construction in the field. They calculate (calc) the points or alignments to be staked based upon the civil engineering or architectural design plans and review them with the field crews on the morning the field work is scheduled to be performed.

Typically the field crews will rely solely upon what was told to them by the office personnel and will never even open the construction plan sets in order to confirm that what they are staking is actually correct.

Per company policy, the field crews are only allowed to stake what is calc'd for them. If they run into a situation that they believe requires some sort of change or adjustment, they must call the office, explain the situation, and await approval before making any changes to the calcs.

The reason I call this "the dummy model" is that it takes away all critical thinking from the party chief and/or field crews. The crews are not expected to make decisions and are forced to rely upon someone else to do what would normally be a major part of their responsibilities.

In my opinion, this type of policy creates more of the infamous “button pushers” and really puts a halt to the field crew’s career growth and skill development.

While I truly do feel this way, I also understand and appreciate why companies implement it. Without doing any research on the matter, I think it is safe to say that the leading cause of lawsuits or back-charges against a surveying firm is errors made in construction staking layout.

This particular business model simply removes another layer of risk from the company's services, further protecting it from possible errors and costly mistakes. As most of us know, a mistake in construction staking doesn't have to be big to be expensive.

Model #2: We Are Not Engineers

Model #2 shares a lot with the first model but differs from it in important ways.

In this model each and every party chief at the company is responsible for the stakes that they put into the ground.

While office personnel are still creating the calcs for staking construction projects, the field crews are responsible for ensuring to the best of their abilities that the calcs are correct. They are responsible for doing quick reviews of the applicable plan sheets as well as ensuring the proposed tie-in areas will fit existing field conditions (i.e. grading, existing manhole inverts, existing curb and gutter, etc.).

Every field crew is mandated to have a set of the project construction plans with them at all times. The crews are considered to be the last line of defense for the company and are responsible for reviewing the plans prior to arriving on site. This ensures the crews become familiar with the proposed designs and helps to make any possible errors made by the office personnel more identifiable to the field crews.

Any time a field crew is staking points created by office personnel that are designed to tie into an existing structure, check shots are to be taken on the existing feature to ensure that the design elevations/locations match or tie-in correctly. If a discrepancy is found between the design location and the actual position, the party chief will contact the office and let them know of the issue.

Once the office personnel confirm that the issue does exist and that no documentation has been provided to them about a correction, they will place a call to the engineering firm to discuss. This is where I came up with model #2’s name, “We are not engineers.”

Although almost any party chief could fix many of these issues on the spot in the field, the company does not want to take on the risk of making a mistake or overlooking a specific detail that the engineers had in mind when creating their design. More important than the possibility of making errors is the ethical factor that surveyors have no right to adjust an engineer’s design without some kind of consent from them, just as no construction company has the right to change a surveyor’s stakes without the same.

If a surveyor does make an adjustment and it turns out to be wrong for any reason, the cost associated with fixing what has been constructed based on the survey stakes in the field will fall on the surveyors, and the company will be held liable.

Model #3:  Make It Work

Model #3 is essentially the same as model #2 minus the engineering license concerns.

The priority of this business model is to keep construction projects moving forward, thus keeping the clients satisfied and happy.

An owner or manager of a surveying firm who implements this particular model wants the party chief on site to make the decision themselves should some kind of issue arise between design and reality. If the party chief can make a decision to adjust a design in order to keep the project moving forward without issue—and the risks are minimal—then the party chief should do so on their own.

This practice, when done correctly, can actually be quite successful. Clients begin to feel confident in the survey work being performed, and they notice that they rarely have to deal with the delays or problems that would otherwise arise.

When no mistakes are made, all parties involved (engineers, surveyors, contractors) remain happy and in good spirits because things run smoothly.

However, when the party chief is wrong, you better believe that the surveying company is going to incur all of the costs that arise because of it.

While I understand the reasoning behind this particular business model, I can’t help but question some of the professional ethics behind it as well as the risk-versus-reward balancing act in the long run.

What’s Best?

All three business models have advantages and disadvantages relative to the others, and each may work better for one company's operations than another's. I am sure that your own experiences and opinions may differ from mine, but please keep in mind that my intent is only to convey which model I believe is best, not for a company, but for the surveying profession as a whole.

Model #2, “We are not engineers,” is what I believe to be the best model for our profession. It helps ensure that our younger generations of surveyors know how to take responsibility for the work that they perform while also teaching them what actions and liabilities may be out of a surveyor’s professional purview. Over time, they learn how to run a project from start to finish and how to perform under pressure from clients, contractors, deadlines, and engineers.

This model is the key to building a stronger generation of future professionals and solidifying a sturdy foundation of principles within them.

From a business standpoint, I believe that mistakes will be made; it is simply inevitable. However, maintaining a near-flawless balance between limiting risk and building the best new generation of professionals should be a goal of every company and every surveyor. I believe that model #2 will yield you the best return on the major investment you have made (and continue to make) in your business and in your employees.

The post Construction Staking appeared first on xyHt.

xyHt by Scott Martin - 2w ago

Last month I wrote of two “Ah Ha” moments early in my career: situations where the “light came on” from a surveying experience, even if the value of the moment wasn’t fully understood at the time. In this issue, I share a couple more: where a practical application cemented abstract concepts and where a lack of knowledge resulted in the undermining of sound fundamentals.

Moment #3

In the summer of 1984, while working for a local utility company, I was fortunate to be selected for a summer-long assignment on a topographic mapping crew. I was chosen because of my prior drafting experience, not because of my surveying knowledge, which was very much still “under construction.”

Our job was to spend the summer doing topographic mapping of lakeside areas along three lakes in the El Dorado National Forest for future campground and trail developments. The utility company operated all three lakes as part of their hydro-electric power system and maintained a private campground for employees and retirees at the largest of the three lakes.

One of the sites we surveyed that became a campground.

That campground would be our weekday home for the duration, which by itself was pretty grand for a 25-year-old outdoor enthusiast. Each campsite had electricity and water. The community area had an ice maker, hot showers, and horseshoe pits. I was in heaven and getting paid for it!

The party chief was a seasoned surveyor who had worked for many years for the California Beaches and Parks Department doing mapping. The instrument man was a journeyman surveyor with private- and public-sector experience. The draftsman was just a guy loving life in the mountains.

A drafting machine.

We were to use stadia techniques to collect the data and draft the map in the field in real time. I was equipped with a drafting board on a tripod, with a drafting arm attached. Pencil on mylar was the medium. Although I had plotted topographic surveys from field notes using a drafting arm, I had never done it like this. I had never seen stadia measurement used, either. It was tough on the back, but I got used to it. I also learned something on the first full day of mapping: you can still get sunburned through a tee shirt!

At each site, the steps in the process were roughly the same. The crafty party chief would walk the site limits, wandering through the conifer forest and granite outcrops to, and along, the shoreline, planning his “attack,” tapping in lath along the way. Those lath became local control locations, chosen for maximum 3D mapping potential and intervisibility.

Once the control was set, based on assumed coordinates, an assumed vertical datum, and an assumed meridian, the scale at which each site was to be mapped was determined. Some sites were an acre or less. Others were several acres. The number of features to be mapped also helped determine the appropriate mapping scale.

Once that was determined, the first mylar sheet was set up on the board, the control plotted, and off we went.

Watching the old timer run the rod was a thing of beauty. He danced through the forest with tactical perfection, selecting locations to take shots that would often yield multiple feature locations. He would call back what the initial shot depicted, such as “face of 16” fir.” Then he would call “from the shot, plus 6.3’, 5.2’ left (90° offset looking from the instrument), 12” fir.” Sometimes there were several “calls” from a single shot. As he was calling out the features, the I-man was giving me the angle right and distance, which I quickly plotted using the drafting arm after having set the vernier to zero on the backsight.
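For anyone who never got to see stadia in action, here is a rough sketch of the arithmetic behind one of those shots. To be clear, this is not the crew’s actual computation: the stadia constant of 100, the set-up coordinates, the backsight azimuth, and the shot numbers are all hypothetical, and the sight is treated as level for simplicity.

```python
import math

# A minimal sketch of reducing one stadia shot, plus an offset "call,"
# to local coordinates. All values here are hypothetical.

K = 100.0                      # stadia interval factor (typical value)
instrument = (1000.0, 1000.0)  # assumed local coordinates of the set-up (ft)
backsight_az = 0.0             # assumed azimuth to the backsight (degrees)

def stadia_shot(angle_right_deg, interval_ft, vert_angle_deg=0.0):
    """Reduce one shot to local easting/northing (ft) from the instrument."""
    # Horizontal distance from the stadia interval (top minus bottom rod reading)
    dist = K * interval_ft * math.cos(math.radians(vert_angle_deg)) ** 2
    az = math.radians(backsight_az + angle_right_deg)
    return (instrument[0] + dist * math.sin(az),
            instrument[1] + dist * math.cos(az))

def offset_call(shot_xy, az_deg, plus_ft, left_ft):
    """A feature called 'plus' along the line of sight and 'left' at 90 degrees."""
    az = math.radians(az_deg)
    x, y = shot_xy
    return (x + plus_ft * math.sin(az) - left_ft * math.cos(az),
            y + plus_ft * math.cos(az) + left_ft * math.sin(az))

# "face of 16-inch fir": angle right 42 degrees, stadia interval 1.25 ft
fir_16 = stadia_shot(42.0, 1.25)
# "from the shot, plus 6.3', 5.2' left, 12-inch fir"
fir_12 = offset_call(fir_16, backsight_az + 42.0, 6.3, 5.2)
print(fir_16, fir_12)
```

On the drafting board, of course, all of this happened graphically: the vernier on the drafting arm stood in for the trigonometry, which is part of what made the method so quick once the control was plotted.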

As each day progressed, the graphical depiction of the physical site was coming alive on the mylar. It was my first experience with “field-to-finish,” except we never left the field. Once all of the planimetric features were mapped from a set-up, the “chasing of contours” began.

Again, the experience and skill of our “rodman” was something to behold. His eye for staying on a constant elevation in undulating terrain was uncanny. Rarely did the I-man have to adjust him up or down more than a couple of tenths of a foot. As he collected, I connected, smoothing in the contours as we went, stopping at the edge of a large granite outcrop, then picking up on the other side and chasing the contour from there. Once the chief reached the limit of mapping from that setup, he would tap in a lath for reference to make sure he knew how far to go from the next set-up.

It was the ultimate training in 3D mapping for me. It is often said that draftsmen should go into the field and collect topo to get better at drafting it, and that field hands should take a crack at plotting topo from their own notes. Both would benefit from walking in the shoes of the other. I was doing it all at the same time. If there were “holes,” we could see them and fill them. If something looked odd, we would fix it by adding an intermediate shot or doing some checks. Reading stadia distances all day long is tough on the eyes, and mistakes were made, but they were caught and fixed onsite.

We repeated this process day after day, site after site, for almost two months. We would break out our trout rods at lunch and after work before heading back to camp. I often went stream fishing in the evenings, or pitched horseshoes with the other “campers.” It was the best surveying gig I have ever had.

But most importantly, I developed “3D vision” that summer, and it aided me immensely in both the field and the office from then on.

Moment #4

Fast forward to about 1995. I had learned a lot in those 10 years. I obtained my license in 1987 and immediately began working at the professional level. I oversaw and processed the work of field crews, ran crews, planned work, and eventually performed boundary analysis and signed and sealed the deliverables. I had developed a solid foundation with the help of mentors and from being “thrown” into the fire many times.

I was now working as a party chief for a large state agency and was introduced to GPS technology for the first time. Although I was given formal training, both field and office, these magic boxes were still very foreign to me, but I was assured that they were the best thing to ever come along for surveyors.

This GPS unit (aka magic box) was for sale back in 2015.

Armed with two magic boxes, I was assigned the task of setting and controlling the aerial targets for a large-scale topographic mapping project for a river restoration program. With a few quad sheets, a pile of NGS datasheets, some old field notes, and a field hand, off I went to the site 200 miles from the office.

Using the quad sheets and datasheets provided by the office, we recovered horizontal and vertical control along the mapping route and set targets along the way. That phase took a couple of weeks.

Then the field observations began. We would set up a base on a control point, then take the rover and do static observations on the surrounding targets and other recovered control. We would then move the base and repeat, without tying any of the points located from the previous base location.

Each night I would process baselines in my motel room to make sure the data had been collected without a hitch. As the work progressed, I watched asterisk after asterisk appear on the computer screen as baselines were processed. No asterisk was connected to another, but we were sure covering a lot of ground quickly compared to traversing and running levels.

We completed the field observations and headed home. I proudly turned in my floppy disks to the office guy for checking, final processing, and adjustment. That was on Thursday. He processed on Friday and I returned to work on Monday, anxiously awaiting my “attaboy” for kicking behind.

Instead, when I asked how it looked, these are the words I heard: “That is the worst survey I have EVER seen.”

What? I got in his face almost immediately, citing my resume, my license, my years of experience, and reputation for quality.

He backed up and calmly said, “Don’t take this personal. Let me explain why.”

In a matter of minutes, it became painfully apparent that he was correct. No checks. No redundancy. No interconnected control from one base location to another. Nothing. Oh, and a mixed hodgepodge of control from different level runs, different datums, different eras. A total MESS.
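Looking back, the heart of his critique can be reduced to a very simple check. Here is a small sketch, using made-up station names and baselines rather than the project’s actual data, of the kind of sanity check that would have flagged the mess before we ever drove home: do the processed baselines tie together into one connected network, and are there any extra baselines left over to close loops with?

```python
from collections import defaultdict

# Hypothetical processed baselines: (from_station, to_station).
# Each base only radiates spokes to its own targets -- the mistake.
baselines = [
    ("BASE1", "TGT101"), ("BASE1", "TGT102"), ("BASE1", "BM_A"),
    ("BASE2", "TGT103"), ("BASE2", "TGT104"), ("BASE2", "BM_B"),
]

graph = defaultdict(set)
for a, b in baselines:
    graph[a].add(b)
    graph[b].add(a)

def components(g):
    """Return the connected components of the baseline network."""
    seen, comps = set(), []
    for start in g:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(g[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(graph)
print(f"{len(comps)} disconnected piece(s)")          # 2 here; a sound network has 1
redundant = len(baselines) - (len(graph) - len(comps))
print(f"{redundant} redundant baseline(s) for loops")  # 0 here; should be well above 0
```

With nothing but spokes radiating from each base station, the sketch reports two disconnected pieces and zero redundant baselines: no loops to close, and nothing to catch a blunder.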

Despite my solid foundation gained from years of terrestrial surveying, the magic boxes had made me lose my mind. They had drained me of every bit of sound surveying practice I had ever learned. I was embarrassed, mad, humiliated. And I was better for it in the long run. Lesson learned.

And then, an intervention from a higher power. Word came that the project had been scrapped and that the mapping was not needed. My screw-up stopped right there. Well, at least it didn’t cause further damage. But that lesson never left me, and I never forgot those words, “Don’t take this personal,” as I mentored others over the years and continued to learn myself.

The measuring tools may change, but the sound procedures and practices of applying them properly never will. I am sure I am not the only one who had to learn that the hard way.

This article appeared in xyHt‘s e-newsletter, Field Notes. We email it once a month, and it covers a variety of land surveying topics in a conversational tone. You’re welcome to subscribe to the e-newsletter here. (You’ll also receive the twice-monthly Pangaea newsletter with your subscription.)

The post The “Ah Ha” Moments, Part 2 appeared first on xyHt.

xyHt by Nicholas Duggan Frgs Cgeog (gis) - 2w ago

The late Roger Tomlinson is credited with being the father of GIS, but, without the commercial success of Jack and Laura Dangermond’s company, Esri, GIS would never have touched our lives the way it has.

This year Esri celebrates its 50th anniversary, though you won’t find any huge announcements from the company about it; Jack has been modest, saying that it is “about the users” (his words).

The company started very small as a land-use consulting firm called the Environmental Systems Research Institute, with only a couple of staff members and funded with Jack’s own savings.

Esri went on to develop computerized techniques for solving geographic problems. In the mid-1970s, it pioneered a map-based information system that allowed it to gain large clients.

It wasn’t until 1981 that Esri released its first commercial application, ARC/INFO. If you’ve ever had the privilege of using a 1980s computer, you’ll appreciate how innovative working with lines and points must have been. It seems simple by today’s standards, but at the time it was ground-breaking.

In the same year Esri held its first user conference with a modest attendance of 18 users.

Esri developed larger projects; in 1983 this included working with the United Nations Environmental Programme on a project that involved the development of high-resolution world maps.

In 1986, Esri released a new product called PC ARC/INFO, designed for personal computers and home use. It was a milestone in GIS, helping grow the user base to 1,500 by 1988. Esri also began opening offices in Canada, France, and additional locations across the United States.

It was the release of ArcView in 1992 that changed GIS as we know it. Within the first six months it sold more than 10,000 copies. GIS had become fully accessible to the world. You could use it on your home desktop machine and work with geographic data in a system that was affordable and easy.

By 1998, Esri had more than 100,000 users and released ArcView 3.2. This changed the world: it put GIS in the hands of everyone who wanted it. We could plot GPS positions on a map; we could analyze overlying risks and constraints. There was finally a way of visualizing the geospatial data around us in a way that had never been available to us before.

We had heard of GIS and what it meant, but it wasn’t until we used it in ArcView 3 that we truly understood the power of what we could do with it.

By 2003, approximately 12,000 people from 135 countries attended the company’s annual user conference (UC). It was shortly after this that ArcGIS 9 was released: the software that everyone knows and loves, the foundation of GIS that many people learned at school and that still lingers in an updated guise as ArcGIS 10.7 (at time of print).

Although Jack Dangermond and Esri aren’t credited as the founders of GIS, they should be noted as its enablers. We shouldn’t be celebrating just Esri turning 50; we should also celebrate GIS turning 50.

If there were any question about how important GIS is, I refer you to the recent announcement by Professor Crowther of Crowther Lab that GIS, or rather Esri GIS, may have saved the planet. Their work gives us the knowledge and tools to offset the 10 gigatons of carbon we release into the atmosphere every year. I cannot think of a better 50th birthday present!

The post The Enablers of GIS appeared first on xyHt.
