Most individuals define geography as a field of study that deals with maps, yet this definition is only partially correct. A better definition of geography may be the study of natural and human-constructed phenomena relative to a spatial dimension.
The discipline of geography has a history that stretches over many centuries. Over this period, geography has evolved and developed into an essential form of human scholarship. Examining the historical evolution of geography as a discipline provides some essential insights concerning its character and methodology. These insights are also helpful in gaining a better understanding of the nature of physical geography.
As stated by the Dartmouth Library, geography is “the study of the interrelationships between people, place, and environment, and how these vary spatially and temporally across and between locations. Whereas physical geography concentrates on spatial and environmental processes that shape the natural world and tends to draw on the natural and physical sciences for its scientific underpinnings and methods of investigation, human geography concentrates on the spatial organization and processes shaping the lives and activities of people, and their interactions with places and nature. Human geography is more allied with the social sciences and humanities, sharing their philosophical approaches and methods.
What distinguishes human geography from other related disciplines, such as development, economics, politics, and sociology, are the application of a set of core geographical concepts to the phenomena under investigation, including space, place, scale, landscape, mobility, and nature. These concepts foreground the notion that the world operates spatially and temporally, and that social relations do not operate independently of place and environment, but are thoroughly grounded in and through them.”
1.1 Scientific Inquiry
Science is a path to gaining knowledge about the natural world. The study of science also includes the body of knowledge that has been collected through scientific inquiry. Scientists conduct scientific investigations by asking testable questions that can be answered through systematic observation and careful collection of evidence. Then they use logical reasoning, and some imagination, to develop a testable explanation, called a hypothesis. Finally, scientists design and conduct experiments based on their hypotheses.
Science seeks to understand the fundamental laws and principles that cause natural patterns and govern natural processes. It is more than just a body of knowledge; science is a way of thinking that provides a means to evaluate and create new knowledge without bias. At its best, science uses objective evidence over subjective evidence to reach sound and logical conclusions.
Truth in science is a difficult concept, and this is because science is falsifiable, which means an initial explanation (hypothesis) is testable and able to be proven false. A scientific theory can never completely be proven correct; it is only after exhaustive attempts to falsify competing ideas and variations that the theory is assumed to be true. While it may seem like a weakness, the strength behind this is that all scientific ideas have stood up to scrutiny, which is not necessarily true for non-scientific ideas and procedures. In fact, the ability to prove current ideas wrong is a driving force in science and has driven many scientific careers.
Early Scientific Thought
Western science began in ancient Greece, specifically Athens, and early democracies like Athens encouraged individuals to think more independently than in the past, when kings ruled most civilizations. Foremost among these early philosopher/scientists was Aristotle, born in 384 B.C.E., who contributed to the foundations of knowledge and science. Aristotle was a student of Plato and a tutor to Alexander the Great, who would conquer the Persian Empire as far as India, spreading Greek culture in the process. Aristotle used deductive reasoning, applying what he thought he knew to establish a new idea (if A, then B).
Deductive reasoning starts with generalized principles or established or assumed knowledge and extends them to new ideas or conclusions. If a deductive conclusion is derived from sound principles, then the conclusion has a high degree of certainty. This contrasts with inductive reasoning which begins from new observations and attempts to discern the underlying principles that explain the observations. Inductive reasoning relies on evidence to infer a conclusion and does not have the perceived certainty of deductive reasoning. Both are important in science. Scientists take existing principles and laws and see if these explain observations. Also, they make new observations and seek to determine the principles and laws that underlie them. Both emphasize the two most important aspects of science: observations and inferences.
Greek culture was absorbed by the Romans. The Romans controlled people and resources in their Empire by building an infrastructure of roads, bridges, and aqueducts. Their road network helped spread Greek culture and knowledge throughout the Empire. The fall of the Roman Empire ushered in the Medieval period in Europe, during which scientific progress in Europe largely stalled. During Europe’s Medieval period, science flourished in the Middle East between 800 and 1450 CE as the Islamic civilization developed. Empirical experimentation grew during this time and was a key component of the scientific revolution that started in 17th-century Europe. Empiricism emphasizes the value of evidence gained from experimentation and observations of the senses. Because of the respect others held for Aristotle’s wisdom and knowledge, his logical approach was accepted for centuries and formed an essential basis for understanding nature. The Aristotelian approach came under criticism by 17th-century scholars of the Renaissance.
As science progressed, certain phenomena that could not be directly sensed or experimented upon, such as atoms, molecules, and the deep time of geology, awaited the development of new technologies. The Renaissance, following the Medieval period between the fourteenth and seventeenth centuries, was a great awakening of artistic and scientific thought and expression in Europe.
The foundational example of the modern scientific approach is the understanding of the solar system. The Greek astronomer Claudius Ptolemy, in the second century, using an Aristotelian approach and mathematics, observed the Sun, Moon, and stars moving across the sky and deductively reasoned that Earth must be at the center of the universe with the celestial bodies circling Earth. Ptolemy even had mathematical, astronomical calculations that supported his argument. The view of the cosmos with Earth at its center is called the geocentric model.
In contrast, early Renaissance scholars used new instruments such as the telescope to enhance astronomical observations and developed new mathematics to explain those observations. These scholars proposed a radically new understanding of the cosmos, one in which Earth and the other planets orbited around the centrally located Sun. This is known as the heliocentric model, and astronomer Nicolaus Copernicus (1473-1543) was the first to offer a solid mathematical explanation for it around 1543.
1.2 The Scientific Method
Science and scientists are wary of situations that either discourage or avoid the process of falsifiability. If a statement or an explanation of a phenomenon cannot be tested or does not meet scientific standards, then it is not considered science, but instead is considered a pseudoscience. Falsifiability separates science from pseudoscience. Pseudoscience is a collection of ideas that may appear scientific but does not use the scientific method. An example of pseudoscience is astrology, a belief system in which the movement of celestial bodies influences human behavior. This is not to be confused with astronomy, which is the scientific study of celestial bodies and the cosmos. There are many celestial observations associated with astrology, but astrology does not use the scientific method. Conclusions in astrology are not based on evidence and experiments, and its statements are not falsifiable.
Science is also a social process. Scientists share their ideas with peers at conferences for guidance and feedback. A scientist’s research paper and data are rigorously reviewed by many qualified peers before publication. Research results are not allowed to be published by a reputable journal or publishing house until other scientists who are experts in the field have determined that the methods are scientifically sound and the conclusions are reasonable. Science aims to “weed out” misinformation, invalid research results, and wild speculation. Thus, the scientific process is slow, cautious, and conservative. Scientists do not jump to conclusions, but wait until an overwhelming amount of evidence from many independent researchers points to the same conclusion before accepting a scientific concept.
Science is the realm of facts and observations, not moral judgments. Scientists might enjoy studying tornadoes, but their opinion that tornadoes are exciting is not essential to learning about them. Scientists increase our technological knowledge, but science does not determine how or if we use that knowledge. Scientists learned to build an atomic bomb, but scientists did not decide whether or when to use it. Scientists have accumulated data on warming temperatures; their models have shown the likely causes of this warming. However, although scientists are primarily in agreement on the causes of global warming, they cannot force politicians or individuals to pass laws or change behaviors.
For science to work, scientists must make some assumptions. The rules of nature, whether simple or complex, are the same everywhere in the universe. Natural events, structures, and landforms have natural causes, and evidence from the natural world can be used to learn about those causes. The objects and events in nature can be better understood through careful, systematic study. Scientific ideas can change if we gather new data or learn more. An idea, even one that is accepted today, may need to be modified or be entirely replaced if new evidence contradicts previous scientific ideas. However, the body of scientific knowledge can grow and evolve because some theories become more accepted with repeated testing or old theories are modified or replaced with new knowledge.
Scientific research may be done to build knowledge or to solve problems and lead to scientific discoveries and technological advances. Pure research often aids in the development of applied research. Sometimes the results of pure research may be applied long after the pure research was completed. Sometimes something unexpected is discovered while scientists are conducting their research. Some ideas are not testable. For example, supernatural phenomena, such as stories of ghosts, werewolves, or vampires, cannot be tested. Scientists describe what they see, whether in nature or a laboratory.
The scientific method is a series of steps that help to investigate questions about the natural world. To answer those questions, scientists use data and evidence gathered from observations, experience, or experiments.
However, scientific inquiry rarely proceeds in the same sequence of steps outlined by the scientific method. For example, the order of the steps might change because more questions arise from the data that is collected. Still, to come to valid conclusions, logical, repeatable steps of the scientific method must be followed.
The most important thing a scientist can do is to ask questions.
- What makes Mount St. Helens more explosive and dangerous than the volcano on Mauna Loa, Hawaii?
- What makes the San Andreas Fault different from the Wasatch Fault?
- Why does Earth have so many varied life forms but other planets in the solar system do not?
- What impacts could a warmer planet have on weather and climate systems?
Earth science can answer testable questions about the natural world. What makes a question impossible to test? Some untestable questions are whether ghosts exist or whether there is life after death. A testable question might be about how to reduce soil erosion on a farm. A farmer has heard of a planting method called “no-till farming.” Using this process eliminates the need for plowing the land. The farmer’s question is: Will no-till farming reduce the erosion of the farmland?
A scientist will first try to find answers to their questions by researching what may already be known about the topic. This information will allow the scientist to create a good experimental design. If this question has already been answered, the research may be enough, or it may lead to new questions. For example, a farmer researches no-till farming on the Internet, at the library, at the local farming supply store, and elsewhere. She learns about various farming methods, what types of fertilizers are best to use and what the best crop spacing would be. From her research, she also learns that no-till farming can be a way to reduce carbon dioxide emissions into the atmosphere, which helps in the fight against global warming.
With the information collected from background research, the scientist creates a plausible explanation for their question, called a hypothesis. The hypothesis must directly answer the question at hand and must be testable. Having a hypothesis guides a scientist in designing experiments and interpreting data. Referring back to the farmer, she would hypothesize that no-till farming will decrease soil erosion on hills of similar steepness, as compared to the traditional farming technique, because there will be fewer disturbances to the soil.
To support or refute a hypothesis, the scientist must collect data. A great deal of logic and methodology goes into designing tests to collect data so the data can answer scientific questions. Data are usually collected through experiment or observation, and sometimes improvements in technology allow new tests to better address a hypothesis.
Observation is used to collect data when it is not possible, for practical or ethical reasons, to perform experiments. Written descriptions of observations are qualitative data, and these data are used to answer critical questions. Scientists use many different types of instruments to make quantitative measurements, typically based on the scientific discipline. Electron microscopes can be used to explore tiny objects, and telescopes to learn about the universe. Probes or drones make observations where it is too dangerous or too impractical for scientists to go.
An objective observation is free of personal bias and is observed the same by all individuals. Humans, by their nature, have biases, so no observation is entirely free of bias; the goal is to be as free of bias as possible. A subjective observation is based on a person’s feelings and beliefs and is unique to that individual. Science uses objective, quantitative observations over subjective, qualitative ones whenever possible.
A quantitative observation can be measured and expressed with a number. Qualitative observations are not numeric but rather verbal descriptions. For example, saying a rock is red or heavy is qualitative. However, measuring the exact color of red, or measuring the density of the rock (which can be traced to the proportion of certain minerals in the rock) is quantitative. This is why quantitative measurements are much more useful to scientists. Calculations can be done on specific numbers, but cannot be done on qualitative values.
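The distinction can be made concrete with a short sketch: a qualitative label like “heavy” supports no calculation, while a measured density does. The mass and volume figures below are hypothetical illustration values, not data from the text:

```python
# Qualitative vs. quantitative: only the measured value supports calculation.
# The sample mass and volume are hypothetical illustration values.

def density(mass_g: float, volume_cm3: float) -> float:
    """Density in g/cm^3, a quantitative value that calculations can use."""
    return mass_g / volume_cm3

qualitative = "the rock is red and heavy"               # verbal description only
quantitative = density(mass_g=330.0, volume_cm3=100.0)  # 3.3 g/cm^3

print(qualitative)
print(f"measured density: {quantitative:.1f} g/cm^3")
```

A density of 3.3 g/cm^3 can then be compared numerically against other rock samples, something no verbal description allows.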
A good experiment must have one factor that can be manipulated or changed, called the independent variable. The rest of the factors must remain the same, called experimental controls. The outcome of the experiment, or what changes as a result of the experiment, is the dependent variable because the variable “depends” on the independent variable.
Return to the example of the farmer. She decides to experiment on two separate hills that have similar steepness and receive similar amounts of sunshine. On one hill, the farmer uses a traditional farming technique that includes plowing. On the other, she uses a no-till technique, spacing plants farther apart and using specialized equipment for planting. The plants on both hillsides receive identical amounts of water and fertilizer, and she measures plant growth on both hillsides. In this experiment:
- What is the independent variable?
- What are the experimental controls?
- What is the dependent variable?
The independent variable is the farming technique – either traditional or no-till – because that is what is being manipulated. For a fair comparison of the two farming techniques, the two hills must have the same slope and the same amount of fertilizer and water. These are the experimental controls. The amount of erosion is the dependent variable. It is what the farmer is measuring. During an experiment, scientists make many measurements. Data in the form of numbers is quantitative.
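The roles just identified can be laid out explicitly in code. The structure and the erosion numbers below are hypothetical, chosen only so the outcome matches the 2.2× erosion ratio this chapter reports for the farmer’s experiment:

```python
# The farmer's two-hill experiment, with each variable's role labeled.
# All values are hypothetical illustration numbers.
experiment = {
    "independent": {"farming_technique": ["traditional", "no-till"]},
    "controls":    {"slope": "similar", "water": "identical", "fertilizer": "identical"},
    "dependent":   "soil_erosion",
}

# Hypothetical measured outcomes (tons of soil lost per hectare per year):
erosion = {"traditional": 4.4, "no-till": 2.0}
ratio = erosion["traditional"] / erosion["no-till"]
print(f"traditional erosion is {ratio:.1f}x the no-till erosion")
```

Because only the farming technique differs between the hills, any difference in the dependent variable can be attributed to that single manipulated factor.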
Data gathered from advanced equipment usually goes directly into a computer, or the scientist may put the data into a database. The data can then be statistically analyzed to determine specific relationships between different categories of data. Statistics can make sense of the variability in a data set.
In just about every human endeavor, errors are unavoidable. In a scientific experiment, this is called experimental error. Systematic errors may be inherent in the experimental setup, so that the numbers are always skewed in one direction. For example, a scale may always measure one-half ounce high. The error will disappear if the scale is re-calibrated. Random errors occur because no measurement can be made with perfect precision. For example, a stopwatch may be stopped too soon or too late. The effect of random errors can be reduced by taking several measurements and averaging them. If a result is inconsistent with the results from other samples, and many tests have been conducted, it is likely that a mistake was made in that experiment, and the inconsistent data point can be thrown out.
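The averaging strategy described above can be sketched in a few lines of Python; the measurement values are hypothetical, with one deliberately inconsistent data point:

```python
# Reduce random error by repeating a measurement and averaging,
# and flag a data point that is wildly inconsistent with the rest.
# All numbers are hypothetical illustration values.
from statistics import mean, stdev

measurements = [12.1, 11.9, 12.0, 12.2, 11.8, 19.5]  # last value looks inconsistent

m, s = mean(measurements), stdev(measurements)
# Keep only values within 2 standard deviations of the mean.
kept = [x for x in measurements if abs(x - m) <= 2 * s]

print(f"raw mean:     {m:.2f}")
print(f"cleaned mean: {mean(kept):.2f}")
```

Averaging the repeated measurements smooths out random error, while the two-standard-deviation screen is one simple (and not the only) rule for spotting a data point that likely reflects a mistake.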
Scientists study graphs, tables, diagrams, images, descriptions, and all other available data to draw conclusions from their experiments. Is there an answer to the question based on the results of the experiment? Was the hypothesis supported? Some experiments support a hypothesis entirely, and some do not. If a hypothesis is shown to be wrong, the experiment was not a failure because all experimental results contribute to knowledge. Experiments that do or do not support a hypothesis may lead to even more questions and more experiments.
Let’s return to the farmer again. After a year, the farmer finds that erosion on the traditionally farmed hill is 2.2 times greater than erosion on the no-till hill. She also discovers that the plants on the no-till plots are taller and have higher amounts of moisture in the soil. From this, she decides to convert to no-till farming for future crops. The farmer continues researching to see what other factors may help reduce erosion.
As scientists conduct experiments and make observations to test a hypothesis, over time they collect many data points. If a hypothesis explains all the data and none of the data contradicts the hypothesis, over time the hypothesis becomes a theory. A scientific theory is supported by many observations and has no significant inconsistencies. A theory must continually be tested and revised by the scientific community. Once a theory has been developed, it can be used to predict behavior. A theory provides a model of reality that is simpler than the phenomenon itself. Even a theory can be overthrown if conflicting data is discovered. However, a longstanding theory that has lots of evidence to back it up is less likely to be overthrown than a newer theory.
Science does not prove anything beyond a shadow of a doubt. Scientists seek evidence that supports or refutes an idea. If there is no significant evidence to refute an idea and a lot of evidence to support it, the idea is accepted. The more lines of evidence that support an idea, the more likely it will stand the test of time. The value of a theory is when scientists can use it to offer reliable explanations and make accurate predictions.
Introductory science courses usually deal with accepted scientific theory; ideas that oppose the accepted theories are generally not included. This makes it easier for students to understand the complex material. A student who further studies a discipline will encounter controversies later. However, at the introductory level, the established science is presented. This section on science denial discusses how some groups of people argue that some established scientific theories are wrong, not based on their scientific merit but rather on the ideology of the group.
When an organization or person denies or doubts the scientific consensus on an issue in a non-scientific way, it is referred to as science denial. The rationale is rarely based on objective scientific evidence but instead is based on subjective social, political, or economic reasons. Science denial is a rhetorical argument that has been applied selectively to issues that some organizations or people oppose. Three (past and current) issues that demonstrate this are: 1) the teaching of evolution in public schools, 2) early links between tobacco smoke and cancer, and 3) anthropogenic (human-caused) climate change. Of these, denial of climate change has a strong connection with geographic science. A climate denier denies explicitly or doubts the scientific conclusions of the community of scientists who specifically study climate.
Science denial generally uses three rhetorical but false arguments. The first argument tries to undermine the science by claiming that the methods are flawed or that the science is unsettled. The idea that the science is unsettled creates doubt for a regular citizen. A sense of doubt delays action. Scientists typically avoid claiming universal truths and use language that conveys a sense of uncertainty, because scientific ideas change as more evidence is uncovered. This avoidance of claims to universal truth should not be mistaken for uncertainty about the conclusions themselves.
The second argument attacks the researchers whose findings they disagree with. They claim that ideology and an economic agenda motivate the scientific conclusions. They claim that the researchers want to “get more funding for their research” or “expand government regulation.” This is an ad hominem argument, in which a person’s character is attacked instead of the merit of their argument.
The third argument is to demand equal media coverage for a “balanced” view in an attempt to validate the false controversy. This includes equal time in the educational curriculum. For example, this last rhetorical argument would demand that religious or other non-scientific explanations for evolution or climate change be taught alongside the scientific ones, even when there is little scientific evidence supporting the alternatives. Conclusions based on the scientific method should not be confused with alternative conclusions based on ideologies. Two entirely different methods for drawing conclusions about nature are involved, and they do not belong together in the same course.
The formation of new conclusions based on the scientific method is the only way to change scientific conclusions. We would not teach Flat Earth geology alongside plate tectonics, because Flat Earthers do not follow the scientific method. That scientists avoid claiming universal truths and change their ideas as more evidence is uncovered is simply how the scientific process works; it should not be taken to mean that the science is unsettled. Because of widespread scientific illiteracy, these arguments are used by those who wish to suppress science and misinform the general public.
In a classic case of science denial, these rhetorical arguments were used in the 1950s, 1960s, and 1970s by the tobacco industry and its scientists to deny the links between tobacco and cancer. Once it became clear that the tobacco industry could not show that tobacco did not cause cancer, its next strategy was to create a sense of “doubt” about the science. It suggested that the science was not yet fully understood, that the issue needed more study, and thus that legislative action should be delayed. This false sense of “doubt” is the key component that misleads the public and prevents action. The same strategy is currently being employed by those who deny human involvement in climate change.
1.3 Geographic Inquiry
Geography is the study of the physical and cultural environments of the earth. What makes geography different from other disciplines is its focus on spatial inquiry and analysis. Geographers also try to look for connections between things such as patterns, movement and migration, trends, and so forth. This process is called a geographic or spatial inquiry. To do this, geographers go through a geographic methodology that is quite similar to the scientific method, but again with a geographic or spatial emphasis. This method can be simplified as the geographic inquiry process.
- Ask a geographic question. This means to ask questions about spatial relationships in the physical and cultural environment.
- Acquire geographic resources. Identify data and information that is needed to answer a particular geographic or spatial question.
- Explore geographic data. Turn the data into maps, tables, and graphs, and look for patterns and relationships.
- Analyze geographic information. Determine the patterns and relationships concerning the geographic or spatial question.
“Knowing where something is, how its location influences its characteristics, and how its location influences relationships with other phenomena are the foundation of geographic thinking. This mode of investigation asks you to see the world and all that is in it in spatial terms. Like other research methods, it also asks you to explore, analyze, and act upon the things you find. It is also important to recognize that this is the same method used by professionals around the world working to address social, economic, political, environmental, and a wide-range of other scientific issues.” (ESRI)
History of Geography and Physical Geography
Some of the first genuinely geographical studies occurred more than four thousand years ago. The primary purpose of these early investigations was to map features and places observed as explorers traveled to new lands. At this time, Chinese, Egyptian, and Phoenician civilizations were beginning to explore the places and spaces within and outside of their homelands. The earliest evidence of such explorations comes from the archaeological discovery of a Babylonian clay tablet map that dates back to 2300 BC.
The early Greeks were the first civilization to practice a form of geography that was more than just map-making, or cartography. Greek philosophers and scientists were also interested in learning about the spatial nature of the human and physical features found on the Earth. One of the first Greek geographers was Herodotus (circa 484 – 425 BC). Herodotus wrote several volumes that described the human and physical geography of the various regions of the Persian Empire.
The ancient Greeks were also interested in the form, size, and geometry of the Earth. Aristotle (circa 384 – 322 BC) hypothesized and scientifically demonstrated that the Earth has a spherical shape. Evidence for this idea came from observations of lunar eclipses, which occur when the Earth casts its circular shadow onto the Moon’s surface. The first individual to accurately calculate the circumference of the Earth was the Greek geographer Eratosthenes (circa 276 – 194 BC). Eratosthenes calculated the equatorial circumference to be 40,233 kilometers using simple geometric relationships. This first calculation was remarkably accurate: measurements of the Earth using modern satellite technology have computed the circumference to be 40,072 kilometers.
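Eratosthenes’ method can be reproduced as a short calculation. He reportedly observed a noon shadow angle of about 7.2° at Alexandria while the Sun cast no shadow at Syene, with the two cities about 5,000 stadia apart; since 7.2° is 1/50 of a full circle, the circumference is about 50 times that distance. The length of the ancient stadion is uncertain, so the kilometer conversion below is an assumed value chosen to land near the 40,233 km figure quoted above:

```python
# Eratosthenes' estimate: the shadow angle at Alexandria is the fraction
# of a full 360-degree circle spanned by the Alexandria-Syene arc.
shadow_angle_deg = 7.2      # reported noon shadow angle at Alexandria
arc_length_stadia = 5000    # reported Alexandria-Syene distance
stadion_km = 0.1609         # assumed stadion length; ancient values are debated

fraction_of_circle = shadow_angle_deg / 360.0                  # 1/50 of a circle
circumference_stadia = arc_length_stadia / fraction_of_circle  # 250,000 stadia
circumference_km = circumference_stadia * stadion_km           # ~40,000 km

print(f"{circumference_stadia:,.0f} stadia ≈ {circumference_km:,.0f} km")
```

With these inputs the estimate comes out near 40,000 km, within a few percent of the modern satellite-based value of 40,072 km.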
Most of the Greek accomplishments in geography were passed on to the Romans. Roman military commanders and administrators used this information to guide the expansion of their Empire. The Romans also made several notable additions to geographical knowledge. Strabo (circa 64 BC – 20 AD) wrote a 17-volume series called “Geographia.” Strabo claimed to have traveled widely and recorded what he had seen and experienced from a geographical perspective. In his series of books, Strabo describes the cultural geographies of the various societies of people found from Britain to as far east as India, south to Ethiopia, and as far north as Iceland. Strabo also suggested a definition of geography that is quite close to the way many human geographers define their discipline today. This definition suggests that geography aimed to “describe the known parts of the inhabited world… to write the assessment of the countries of the world [and] to treat the differences between countries.”
During the second century AD, Ptolemy (circa 100 – 178 AD) made some important contributions to geography. Ptolemy’s publication Geographike hyphegesis, or “Guide to Geography,” compiled and summarized much of the Greek and Roman geographic information accumulated at that time. Some of his other notable contributions include the creation of three different methods for projecting the Earth’s surface onto a map, the calculation of coordinate locations for some eight thousand places on the Earth, and the development of the concepts of geographical latitude and longitude.
Little academic progress in geography occurred after the Roman period. For the most part, the Middle Ages (5th to 13th centuries AD) were a time of intellectual stagnation. In Europe, the Vikings of Scandinavia were the only group of people carrying out active exploration of new lands. In the Middle East, Arab academics began translating the works of Greek and Roman geographers starting in the 8th century and began exploring southwestern Asia and Africa. Some of the most important intellectuals in Arab geography were Al-Idrisi, Ibn Battutah, and Ibn Khaldun. Al-Idrisi is best known for his skill at making maps and for his work of descriptive geography Kitab nuzhat al-mushtaq fi ikhtiraq al-afaq, or “The Pleasure Excursion of One Who Is Eager to Traverse the Regions of the World.” Ibn Battutah and Ibn Khaldun are well known for writing about their extensive travels in North Africa and the Middle East.
During the Renaissance (1400 to 1600 AD) numerous journeys of geographical exploration were commissioned by a variety of nation-states in Europe. Most of these voyages were financed because of the potential commercial returns from resource exploitation. The voyages also provided an opportunity for scientific investigation and discovery. These voyages also added many significant contributions to geographic knowledge. Important explorers of this period include Christopher Columbus, Vasco da Gama, Ferdinand Magellan, Jacques Cartier, Sir Martin Frobisher, Sir Francis Drake, John and Sebastian Cabot, and John Davis. Also during the Renaissance, Martin Behaim created a spherical globe depicting the Earth in its true three-dimensional form in 1492. Behaim’s invention was a significant advance over two-dimensional maps because it created a more realistic depiction of the Earth’s shape and surface configuration.
In the 17th century, Bernhardus Varenius (1622-1650) published an important geographic reference titled Geographia generalis (General Geography: 1650). In this volume, Varenius used direct observations and primary measurements to present some new ideas concerning geographic knowledge. This work continued to be a standard geographic reference for about 100 years. Varenius also suggested that the discipline of geography could be subdivided into three distinct branches. The first branch examines the form and dimensions of the Earth. The second sub-discipline deals with tides, climatic variations over time and space, and other variables that are influenced by the cyclical movements of the Sun and Moon. Together these two branches form the early beginning of what we collectively now call physical geography. The last branch of geography examined distinct regions on the Earth using comparative cultural studies. Today, this area of knowledge is called cultural geography.
During the 18th century, the German philosopher Immanuel Kant (1724-1804) proposed that human knowledge could be organized in three different ways. One way of organizing knowledge was to classify its facts according to the type of objects studied. Accordingly, zoology studies animals, botany examines plants, and geology involves the investigation of rocks. The second way one can study things is according to a temporal dimension. This field of knowledge is, of course, called history. The last method of organizing knowledge involves understanding facts relative to spatial relationships. This field of knowledge is commonly known as geography. Kant also divided geography into some sub-disciplines. He recognized the following six branches: Physical, mathematical, moral, political, commercial, and theological geography.
Geographic knowledge saw strong growth in Europe and the United States in the 1800s. This period also saw the emergence of a number of societies interested in geographic issues. In Germany, Alexander von Humboldt, Carl Ritter, and Friedrich Ratzel made substantial contributions to human and physical geography. Humboldt’s publication Kosmos (1844) examines the geology and physical geography of the Earth. This work is considered by many academics to be a milestone contribution to geographic scholarship. Late in the 19th century, Ratzel theorized that the distribution and culture of the Earth’s various human populations were strongly influenced by the natural environment. The French geographer Paul Vidal de la Blache opposed this idea. Instead, he suggested that human beings were a dominant force shaping the form of the environment. The idea that humans were modifying the physical environment was also prevalent in the United States. In 1847, George Perkins Marsh gave an address to the Agricultural Society of Rutland County, Vermont. The subject of this speech was that human activity was having a destructive impact on the land, primarily through deforestation and land conversion. This speech also became the foundation for his book Man and Nature or The Earth as Modified by Human Action, first published in 1864. In this publication, Marsh warned of the ecological consequences of the continued development of the American frontier.
During the first 50 years of the 1900s, many academics in the field of geography extended the various ideas presented in the previous century to studies of small regions all over the world. Most of these studies used descriptive field methods to test research questions. Starting around 1950, geographic research experienced a shift in methodology. Geographers began adopting a more scientific approach that relied on quantitative techniques. The quantitative revolution was also associated with a change in the way in which geographers studied the Earth and its phenomena. Researchers now began investigating process rather than a mere description of the event of interest. Today, the quantitative approach is becoming even more prevalent due to advances in computer and software technologies.
In 1964, William Pattison published an article in the Journal of Geography (1964, 63: 211-216) that suggested that modern Geography was now composed of the following four academic traditions:
- Spatial Tradition – the investigation of the phenomena of geography from a strictly spatial perspective.
- Area Studies Tradition – the geographical study of an area on the Earth at either the local, regional, or global scale.
- Human-Land Tradition – the geographical study of human interactions with the environment.
- Earth Science Tradition – the study of natural phenomena from a spatial perspective. This tradition is best described as theoretical physical geography.
Today, the academic traditions described by Pattison are still dominant fields of geographical investigation. However, the frequency and magnitude of human-mediated environmental problems have increased steadily since his article was published. These increases are the result of a growing human population and the consequent increase in the consumption of natural resources. As a result, an increasing number of researchers in geography are studying how humans modify the environment. A significant number of these projects also develop strategies to reduce the negative impact of human activities on nature. Some of the dominant themes in these studies include environmental degradation of the hydrosphere, atmosphere, lithosphere, and biosphere; resource use issues; natural hazards; environmental impact assessment; and the effect of urbanization and land-use change on natural environments.
Considering all of the statements presented concerning the history and development of geography, we are now ready to formulate a somewhat coherent definition. This definition suggests that geography, in its purest form, is the field of knowledge that is concerned with how phenomena are spatially organized. Physical geography attempts to determine why natural phenomena have particular spatial patterns and orientation. This online textbook will focus primarily on the Earth Science Tradition. Some of the information that is covered in this textbook also deals with the alterations of the environment because of human interaction. These pieces of information belong in the Human-Land Tradition of geography.
1.4 Elements of Geography
Geography is a discipline that integrates a wide variety of subject matter. Almost any area of human knowledge can be examined from a spatial perspective. Physical geography’s primary subdisciplines study the Earth’s atmosphere (meteorology and climatology), animal and plant life (biogeography), physical landscape (geomorphology), soils (pedology), and waters (hydrology). Some of the principal areas of study in human geography include human society and culture (social and cultural geography), behavior (behavioral geography), economics (economic geography), politics (political geography), and urban systems (urban geography).
Holistic synthesis connects knowledge from a variety of academic fields in both human and physical geography. For example, the study of the enhancement of the Earth’s greenhouse effect and the resulting global warming requires a multidisciplinary approach for complete understanding. The fields of climatology and meteorology are required to understand the physical effects of adding additional greenhouse gases to the atmosphere’s radiation balance. The field of economic geography provides information on how various forms of human economic activity contribute to the emission of greenhouse gases through fossil fuel burning and land-use change. Combining the knowledge of both of these academic areas gives us a more comprehensive understanding of why this environmental problem occurs.
The holistic nature of geography is both a strength and a weakness. Geography’s strength comes from its ability to connect functional interrelationships that are not generally noticed in narrowly defined fields of knowledge. The most apparent weakness associated with the geographical approach is related to the fact that holistic understanding is often too simple and misses essential details of cause and effect.
1.5 Scope of Physical Geography
Physical geography examines natural phenomena spatially. Specifically, physical geography studies the spatial patterns of weather and climate, soils, vegetation, animals, water in all its forms, and landforms. Physical geography also examines the interrelationships of these phenomena to human activities. This sub-field of geography is academically known as the Human-Land Tradition. This area of geography has seen increased interest in the last few decades because of the acceleration of human-induced environmental degradation. Thus, physical geography’s scope is much broader than the simple spatial study of nature. It also involves the investigation of how humans are influencing nature.
Academics studying physical geography and other related earth sciences are rarely generalists. Most are in fact highly specialized in their fields of knowledge and tend to focus themselves in one of the following distinct areas of understanding in physical geography:
- Geomorphology – studies the various landforms on the Earth’s surface.
- Pedology – is concerned with the study of soils.
- Biogeography – is the science that investigates the spatial relationships of plants and animals.
- Hydrology – is interested in the study of water in all its forms.
- Meteorology – studies the circulation of the atmosphere over short time spans.
- Climatology – studies the effects of weather on life and examines the circulation of the atmosphere over longer time spans.
The above fields of knowledge generally have a primary role in introductory textbooks dealing with physical geography. Introductory physical geography textbooks can also contain information from other related disciplines including:
- Geology – studies the form of the Earth’s surface and subsurface, and the processes that create and modify it.
- Ecology – the scientific study of the interactions between organisms and their environment.
- Oceanography – the science that examines the biology, chemistry, physics, and geology of oceans.
- Cartography – the technique of making maps.
- Astronomy – the science that examines celestial bodies and the cosmos.
- And more…
1.6 Understanding Maps
A map can be defined as a graphic representation of the real world. Because the real world contains an essentially infinite amount of detail, it is impossible for a map to capture all of its complexity. For example, topographic maps abstract the three-dimensional real world at a reduced scale on a two-dimensional plane of paper.
Maps are used to display both cultural and physical features of the environment. Standard topographic maps show a variety of information including roads, land-use classification, elevation, rivers and other water bodies, political boundaries, and the identification of houses and other types of buildings. Some maps are created with a particular purpose in mind.
Most maps allow us to specify the location of points on the Earth’s surface using a coordinate system. For a two-dimensional map, this coordinate system can use simple geometric relationships between the perpendicular axes on a grid system to define spatial location. Two types of coordinate systems are currently in general use in geography: the geographical coordinate system and the rectangular (also called Cartesian) coordinate system.
Geographical Coordinate System
The geographical coordinate system measures location from only two values, despite the fact that the locations are described for a three-dimensional surface. The two values used to define location are both measured relative to the polar axis of the Earth. The two measures used in the geographic coordinate system are called latitude and longitude.
Latitude is an angular measurement north or south of the equator relative to a point found at the center of the Earth. This central point is also located on the Earth’s rotational or polar axis. The equator is the starting point for the measurement of latitude. The equator has a value of zero degrees. A line of latitude or parallel of 30° North has an angle that is 30° north of the plane represented by the equator. The maximum value that latitude can attain is either 90° North or South. These lines of latitude run parallel to the equator and perpendicular to the Earth’s rotational axis.
Lines connecting points of the same latitude, called parallels, run parallel to each other. The only parallel that is also a great circle is the equator. All other parallels are small circles. The following are the most important parallels:
- Equator, 0 degrees
- Tropic of Cancer, 23.5 degrees N
- Tropic of Capricorn, 23.5 degrees S
- Arctic Circle, 66.5 degrees N
- Antarctic Circle, 66.5 degrees S
- North Pole, 90 degrees N (infinitely small circle)
- South Pole, 90 degrees S (infinitely small circle)
Longitude is the angular measurement east and west of the Prime Meridian. The position of the Prime Meridian was determined by international agreement to be in line with the location of the former astronomical observatory at Greenwich, England. Because longitude is measured around the Earth’s roughly circular circumference, it is expressed in degrees; a circle contains 360 degrees. The Prime Meridian has a value of zero degrees. A line of longitude or meridian of 45° West has an angle that is 45° west of the plane represented by the Prime Meridian. The maximum value that a meridian of longitude can have is 180°, which is the distance halfway around a circle. This meridian is called the International Date Line. Designations of west and east are used to distinguish where a location is found relative to the Prime Meridian. For example, all of the locations in North America have a longitude that is designated west.
At 180 degrees from the Prime Meridian, in the Pacific Ocean, lies the International Date Line. This line determines where the new day begins in the world. For this reason, the International Date Line is not a straight line; rather, it follows national borders so that a country is not divided into two separate days.
Ultimately, when parallel and meridian lines are combined, the result is a geographic grid system that allows users to determine their exact location on the planet.
This is an appropriate point to briefly consider some consequences of the fact that the Earth is round and not flat.
Great and Small Circles
Much of Earth’s grid system is based on the location of the North Pole, the South Pole, and the equator. The poles are the two points where Earth’s axis of rotation intersects the surface. The plane of the equator is an imaginary plane, perpendicular to that axis, that cuts the Earth into two equal halves. This brings up the topic of great and small circles. A great circle is any circle that divides the Earth into two equal halves; it is also the largest circle that can be drawn on a sphere. The arc connecting any two points along a great circle is also the shortest distance between those two points.
Examples of great circles include the equator, any meridian of longitude paired with the meridian opposite it, the line that divides the Earth into day and night (called the circle of illumination), and the circle formed where the plane of the ecliptic, the plane of Earth’s orbit around the Sun, intersects the Earth. Small circles are circles that cut the Earth, but not into equal halves.
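The shortest-distance property of great circles can be sketched numerically. The haversine formula below is a standard way to approximate great-circle distance on a spherical Earth; the function name and the 6371 km mean radius are our own choices for illustration, not part of the text.

```python
import math

def great_circle_distance_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Shortest surface distance between two lat/lon points (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)          # difference in latitude
    dlmb = math.radians(lon2 - lon1)          # difference in longitude
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# A quarter of the way around the equator (0° to 90° longitude):
print(round(great_circle_distance_km(0, 0, 0, 90)))  # ~10008 km
```

Because the equator is a great circle, the result is one quarter of the Earth's circumference; a route between the same two longitudes along any small circle would be longer.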
Universal Transverse Mercator System (UTM)
Another commonly used method to describe a location on the Earth is the Universal Transverse Mercator (UTM) grid system. This rectangular coordinate system is metric, incorporating the meter as its basic unit of measurement. UTM also uses the Transverse Mercator projection system to model the Earth’s spherical surface onto a two-dimensional plane. The UTM system divides the world’s surface into 60 north-south zones, each six degrees of longitude wide. These zones start at the International Date Line and are successively numbered in an eastward direction. Each zone stretches from 84° North to 80° South. In the center of each of these zones is a central meridian.
Location is measured in these zones from a false origin, which is determined relative to the intersection of the equator and the central meridian for each zone. For locations in the Northern Hemisphere, the false origin is 500,000 meters west of the central meridian on the equator. Coordinate measurements of location in the Northern Hemisphere using the UTM system are made relative to this point in meters in eastings (longitudinal distance) and northings (latitudinal distance). The point defined by the intersection of 50° North and 9° West would have a UTM coordinate of Zone 29, 500000 meters east (E), 5538630 meters north (N). In the Southern Hemisphere, the origin is 10,000,000 meters south and 500,000 meters west of the equator and central meridian, respectively. The location found at 50° South and 9° West would have a UTM coordinate of Zone 29, 500000 meters E, 4461369 meters N (remember that northing in the Southern Hemisphere is measured from 10,000,000 meters south of the equator).
The UTM system has been modified to make measurements less confusing. In this modification, the six degree wide zones are divided into smaller pieces or quadrilaterals that are eight degrees of latitude tall. Each of these rows is labeled, starting at 80° South, with the letters C to X consecutively with I and O being omitted. The last row X differs from the other rows and extends from 72 to 84° North latitude (twelve degrees tall). Each of the quadrilaterals or grid zones are identified by their number/letter designation. In total, 1200 quadrilaterals are defined in the UTM system.
The quadrilateral system allows us to define location further using the UTM system. For the location 50° North and 9° West, the UTM coordinate can now be expressed as Grid Zone 29U, 500000 meters E, 5538630 meters N.
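The zone number and quadrilateral letter described above follow directly from longitude and latitude. The sketch below illustrates the arithmetic (the function name is ours, and the special zone exceptions used around Norway and Svalbard are ignored):

```python
def utm_grid_zone(lat, lon):
    """Return the UTM grid-zone designation (e.g. '29U') for a lat/lon in degrees.

    Assumes -80 <= lat <= 84 and ignores the Norway/Svalbard zone exceptions.
    """
    zone = int((lon + 180) // 6) + 1
    if lon == 180:                       # the antimeridian itself belongs to zone 60
        zone = 60
    bands = "CDEFGHJKLMNPQRSTUVWX"       # 8-degree rows from 80 S; I and O omitted
    band = bands[min(int((lat + 80) // 8), 19)]  # clamp so row X spans 72-84 N
    return f"{zone}{band}"

print(utm_grid_zone(50, -9))  # → 29U, the example used in the text
```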
Each UTM quadrilateral is further subdivided into a number of 100,000 by 100,000-meter zones. These subdivisions are coded by a system of letter combinations where the same two-letter combination is not repeated within 18 degrees of latitude and longitude. Within each of the 100,000-meter squares, one can specify a location to one-meter accuracy using a five-digit eastings and northings reference system.
The UTM grid system is displayed on all United States Geological Survey (USGS) and National Topographic Series (NTS) of Canada maps. On USGS 7.5-minute quadrangle maps (1:24,000 scale), 15-minute quadrangle maps (1:50,000, 1:62,500, and standard-edition 1:63,360 scales), and Canadian 1:50,000 maps the UTM grid lines are drawn at intervals of 1,000 meters. Both are shown either with blue ticks at the edge of the map or by full blue grid lines. On USGS maps at 1:100,000 and 1:250,000 scale and Canadian 1:250,000 scale maps a full UTM grid is shown at intervals of 10,000 meters.
1.7 Time Zones
Before the late nineteenth century, timekeeping was primarily a local phenomenon. Each town would set their clocks according to the motions of the Sun. Noon was defined as the time when the Sun reached its maximum altitude above the horizon. Cities and towns would assign a clockmaker to calibrate a town clock to these solar motions. This town clock would then represent “official” time, and the citizens would set their watches and clocks accordingly.
The latter half of the nineteenth century was a time of increased movement of people. In the United States and Canada, large numbers of people were moving west and settlements in these areas began expanding rapidly. To support these new settlements, railroads moved people and resources between the various cities and towns. However, because of the nature of how local time was kept, the railroads experienced significant problems in constructing timetables for the various stops. Timetables could only become more efficient if the towns and cities adopted some standard method of keeping time.
In 1878, Canadian Sir Sanford Fleming suggested a system of worldwide time zones that would simplify the keeping of time across the Earth. Fleming proposed that the globe should be divided into 24 time zones, every 15 degrees of longitude in width. Since the world rotates once every 24 hours on its axis and there are 360 degrees of longitude, each hour of Earth rotation represents 15 degrees of longitude.
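Fleming's 15-degrees-per-hour arithmetic can be sketched as a small function. Note that this gives only the idealized offset for a longitude; as discussed below, real zone boundaries follow political borders. The function name is ours.

```python
def nominal_utc_offset(lon):
    """Idealized time-zone offset in hours for a longitude, per Fleming's scheme.

    360 degrees / 24 hours = 15 degrees of longitude per hour of rotation.
    Real time zones deviate from this to follow national and regional borders.
    """
    return round(lon / 15)

print(nominal_utc_offset(-75))  # eastern North America → -5
print(nominal_utc_offset(0))    # the Prime Meridian → 0
```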
Railroad companies in Canada and the United States began using Fleming’s time zones in 1883. In 1884, an International Prime Meridian Conference was held in Washington, D.C., to adopt a standardized method of timekeeping and to determine the location of the Prime Meridian. Conference members agreed that the longitude of Greenwich, England, would become zero degrees longitude and established the 24 time zones relative to the Prime Meridian. It was also proposed that the measurement of time on the Earth would be made relative to the astronomical measurements at the Royal Observatory at Greenwich. This time standard was called Greenwich Mean Time (GMT).
Today, many nations operate on variations of the time zones suggested by Sir Fleming. In this system, time in the various zones is measured relative to the Coordinated Universal Time (UTC) standard at the Prime Meridian. Coordinated Universal Time became the standard legal reference of time all over the world in 1972. UTC is determined from atomic clocks that are coordinated by the International Bureau of Weights and Measures (BIPM) located in France. The numbers located at the bottom of the time zone map indicate how many hours each zone is earlier (negative sign) or later (positive sign) than the Coordinated Universal Time standard. Also, note that national boundaries and political matters influence the shape of the time zone boundaries. For example, China uses a single time zone (eight hours ahead of Coordinated Universal Time) instead of five different time zones.
1.8 Coordinate Systems and Projections
Add content here…
Distance on Maps
Depicting the Earth’s three-dimensional surface on a two-dimensional map creates a variety of distortions that involve distance, area, and direction. It is possible to create maps that are somewhat equidistant. However, even these types of maps have some form of distance distortion. Equidistant maps can only control distortion along either lines of latitude or lines of longitude. Distance is often correct on equidistant maps only in the direction of latitude.
On a map that has a large scale, 1:125,000 or larger, distance distortion is usually insignificant. An example of a large-scale map is a standard topographic map. On these maps measuring straight line distance is simple. Distance is first measured on the map using a ruler. This measurement is then converted into a real-world distance using the map’s scale. For example, if we measured a distance of 10 centimeters on a map that had a scale of 1:10,000, we would multiply 10 (distance) by 10,000 (scale). Thus, the actual distance in the real world would be 100,000 centimeters.
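The scale conversion worked through above is a single multiplication, sketched here with an illustrative helper name of our own:

```python
def map_to_ground(map_distance, scale_denominator):
    """Convert a distance measured on the map to real-world distance.

    The result is in the same units as the map measurement: a map distance in
    centimeters on a 1:10,000 map yields a ground distance in centimeters.
    """
    return map_distance * scale_denominator

ground_cm = map_to_ground(10, 10_000)  # the 1:10,000 example from the text
print(ground_cm)             # 100000 cm of real-world distance
print(ground_cm / 100_000)   # which is 1 kilometer
```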
Measuring distance along map features that are not straight is a little more difficult. One technique that can be employed for this task is to use several straight-line segments. The accuracy of this method is dependent on the number of straight-line segments used. Another method for measuring curvilinear map distances is to use a mechanical device called an opisometer. This device uses a small rotating wheel that records the distance traveled. The recorded distance is measured by this device either in centimeters or inches.
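The straight-line-segment technique can be sketched as follows: a curved feature is approximated by summing the lengths of short segments between sampled points, and accuracy improves as more points are used. This is an illustrative sketch, not a standard cartographic routine.

```python
import math

def polyline_length(points):
    """Approximate the length of a curved map feature from sampled (x, y) points.

    Sums the straight-line distance between each consecutive pair of points;
    using more closely spaced points gives a better approximation of the curve.
    """
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# A feature sampled at four points (map units, e.g. centimeters):
print(polyline_length([(0, 0), (3, 4), (6, 8), (9, 12)]))  # 15.0
```

The map-units result would then be converted to ground distance with the map's scale, exactly as for straight-line measurements.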
Direction on Maps
Like distance, direction is difficult to measure on maps because of the distortion produced by projection systems. However, this distortion is quite small on maps with scales larger than 1:125,000. Direction is usually measured relative to the location of the North or South Pole. Directions determined from these locations are said to be relative to True North or True South. The magnetic poles can also be used to measure direction. However, these points on the Earth are located in spatially different spots from the geographic North and South Poles. The North Magnetic Pole is located at 78.3° North, 104.0° West near Ellef Ringnes Island, Canada. In the Southern Hemisphere, the South Magnetic Pole is located in Commonwealth Bay, Antarctica, at a geographical location of 65° South, 139° East. The magnetic poles are also not fixed and shift their spatial position over time.
Topographic maps typically have a declination diagram drawn on them. On Northern Hemisphere maps, declination diagrams describe the angular difference between Magnetic North and True North. On the map, the angle of True North is parallel to the depicted lines of longitude. Declination diagrams also show the direction of Grid North. Grid North is an angle that is parallel to the easting lines found on the Universal Transverse Mercator (UTM) grid system.
In the field, the direction of features is often determined by a magnetic compass, which measures angles relative to Magnetic North. Using the declination diagram found on a map, individuals can convert their field measures of magnetic direction into directions that are relative to either Grid or True North. Compass directions can be described by using either the azimuth system or the bearing system. The azimuth system calculates direction in degrees of a full circle, which has 360 degrees. In the azimuth system, north has a direction of either 0° or 360°. East and west have azimuths of 90° and 270°, respectively. Due south has an azimuth of 180°.
The bearing system divides direction into four quadrants of 90 degrees. In this system, north and south are the dominant directions. Measurements are determined in degrees from one of these directions.
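The relationship between the two systems can be sketched as a conversion from an azimuth to a quadrant bearing. Bearing notation varies between texts; this illustrative function, with a name of our own, uses one common convention in which the degrees are measured from north or south toward east or west.

```python
def azimuth_to_bearing(az):
    """Convert an azimuth (degrees clockwise from north) to a quadrant bearing."""
    az %= 360
    if az <= 90:                      # NE quadrant: measured from north toward east
        return f"N {az:g} E"
    if az <= 180:                     # SE quadrant: measured from south toward east
        return f"S {180 - az:g} E"
    if az <= 270:                     # SW quadrant: measured from south toward west
        return f"S {az - 180:g} W"
    return f"N {360 - az:g} W"        # NW quadrant: measured from north toward west

print(azimuth_to_bearing(120))  # → S 60 E
```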
Geography is about spatial understanding, which requires an accurate grid system to determine absolute and relative location. Absolute location is the exact x- and y-coordinate position on the Earth. Relative location is the location of something relative to other entities. For example, when someone uses the GPS on their smartphone or in their car, they enter an absolute location. However, as they start driving, the device tells them to turn right or left relative to objects on the ground.
1.9 Geospatial Technology
Data, data, data… data is everywhere. There are two basic types of data to be familiar with: spatial and non-spatial data. Spatial data, also called geospatial data, is data directly related to a specific location on Earth. Geospatial data is becoming “big business” because it is not just data, but data that can be located, tracked, patterned, and modeled based on other geospatial data. Census information that is collected every ten years is an example of spatial data. Non-spatial data is data that cannot be traced to a specific location, including the number of people living in a household, enrollment within a specific course, or gender information. However, non-spatial data can easily become spatial data if it can connect in some way to a location. Geospatial technology specialists have a method called geocoding that can be used to give non-spatial data a geographic location. Once data has a spatial component associated with it, the type of questions that can be asked dramatically changes.
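As a minimal, hypothetical sketch, geocoding can be thought of as joining non-spatial records to a reference table of coordinates. Real geocoders rely on large address databases and fuzzy matching; every name and coordinate below is invented for illustration.

```python
# Hypothetical place -> (lat, lon) reference table; real geocoders use far
# larger databases and tolerate misspelled or partial addresses.
GAZETTEER = {
    "Greenwich, England": (51.48, 0.0),
    "Rutland County, Vermont": (43.6, -73.0),
}

def geocode(records, place_key="place"):
    """Return copies of the records with a 'coords' field added where a match exists."""
    return [{**rec, "coords": GAZETTEER.get(rec.get(place_key))} for rec in records]

# Non-spatial household data becomes spatial once coordinates are attached:
households = [{"place": "Greenwich, England", "residents": 3}]
print(geocode(households)[0]["coords"])  # (51.48, 0.0)
```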
Remote sensing can be defined as the collection of data about an object from a distance. Humans and many other types of animals accomplish this task with the aid of eyes or by the sense of smell or hearing. Geographers use the technique of remote sensing to monitor or measure phenomena found in the Earth’s lithosphere, biosphere, hydrosphere, and atmosphere. Remote sensing of the environment by geographers is usually done with the help of mechanical devices known as remote sensors. These gadgets have a significantly improved ability to receive and record information about an object without any physical contact. Often, these sensors are positioned away from the object of interest by using helicopters, planes, and satellites. Most sensing devices record information about an object by measuring an object’s transmission of electromagnetic energy from reflecting and radiating surfaces.
Remote sensing imagery has many applications in mapping land-use and cover, agriculture, soils mapping, forestry, city planning, archaeological investigations, military observation, and geomorphological surveying, among other uses. For example, foresters use aerial photographs for preparing forest cover maps, locating possible access roads, and measuring quantities of trees harvested. Specialized photography using color infrared film has also been used to detect disease and insect damage in forest trees.
Satellite Remote Sensing
The simplest form of remote sensing uses photographic cameras to record information from visible or near-infrared wavelengths. In the late 1800s, cameras were positioned above the Earth’s surface in balloons or kites to take oblique aerial photographs of the landscape. During World War I, aerial photography played an important role in gathering information about the position and movements of enemy troops. These photographs were often taken from airplanes. After the war, civilian use of aerial photography from airplanes began with the systematic vertical imaging of large areas of Canada, the United States, and Europe. Many of these images were used to construct topographic and other types of reference maps of the natural and human-made features found on the Earth’s surface.
The development of color photography following World War II gave a more natural depiction of surface objects. Color aerial photography also substantially increased the amount of information gathered from an object. The human eye can differentiate many more shades of color than tones of gray. In 1942, Kodak developed color infrared film, which recorded wavelengths in the near-infrared part of the electromagnetic spectrum. This film type had good haze penetration and the ability to determine the type and health of vegetation.
In the 1960s, a revolution in remote sensing technology began with the deployment of space satellites. From their high vantage-point, satellites have an extended view of the Earth’s surface. The first meteorological satellite, TIROS-1, was launched by the United States using an Atlas rocket on April 1, 1960. This early weather satellite used vidicon cameras to scan broad areas of the Earth’s surface. Early satellite remote sensors did not use conventional film to produce their images. Instead, the sensors captured images digitally using a device similar to a television camera. Once captured, this data was then transmitted electronically to receiving stations found on the Earth’s surface.
In the 1970s, the second revolution in remote sensing technology began with the deployment of the Landsat satellites. Since 1972, several generations of Landsat satellites with their Multispectral Scanners (MSS) have been providing continuous coverage of the Earth for almost 30 years. Currently, Landsat satellites orbit the Earth’s surface at an altitude of approximately 700 kilometers. The spatial resolution of objects on the ground surface is 79 x 56 meters. Complete coverage of the globe requires 233 orbits and occurs every 16 days. The Multispectral Scanner records a zone of the Earth’s surface that is 185 kilometers wide in four wavelength bands: band 4 at 0.5 to 0.6 micrometers, band 5 at 0.6 to 0.7 micrometers, band 6 at 0.7 to 0.8 micrometers, and band 7 at 0.8 to 1.1 micrometers. Bands 4 and 5 receive the green and red wavelengths in the visible light range of the electromagnetic spectrum. The last two bands image near-infrared wavelengths. A second sensing system was added to Landsat satellites launched after 1982. This imaging system, known as the Thematic Mapper, records seven wavelength bands from the visible to far-infrared portions of the electromagnetic spectrum. Also, the ground resolution of this sensor was enhanced to 30 x 20 meters. This modification allows for significantly improved clarity of imaged objects.
The usefulness of satellites for remote sensing has resulted in several other organizations launching their own devices. In France, the SPOT (Satellite Pour l’Observation de la Terre) satellite program has launched five satellites since 1986. Since 1986, SPOT satellites have produced more than 10 million images. SPOT satellites use two different sensing systems to image the planet. One sensing system produces black and white panchromatic images from the visible band (0.51 to 0.73 micrometers) with a ground resolution of 10 x 10 meters. The other sensing device is multispectral capturing green, red, and reflected infrared bands at 20 x 20 meters. SPOT-5, which was launched in 2002, is much improved from the first four versions of SPOT satellites. SPOT-5 has a maximum ground resolution of 2.5 x 2.5 meters in both panchromatic mode and multispectral operation.
Radarsat-1 was launched by the Canadian Space Agency in November 1995. As a remote sensing device, Radarsat is entirely different from the Landsat and SPOT satellites. Whereas Landsat and SPOT sensors passively measure reflected radiation at wavelengths roughly equivalent to those detected by our eyes, Radarsat is an active remote sensing system that transmits and receives microwave radiation. Radarsat’s microwave energy penetrates clouds, rain, dust, and haze, and it produces images regardless of the Sun’s illumination, allowing it to image in darkness. Radarsat images have a resolution between 8 and 100 meters. This sensor has found important applications in crop monitoring, defense surveillance, disaster monitoring, geologic resource mapping, sea-ice mapping and monitoring, oil slick detection, and digital elevation modeling.
Today, the GOES (Geostationary Operational Environmental Satellite) system of satellites provides most of the remotely sensed weather information for North America. To cover the entire continent and adjacent oceans, two satellites are employed in geostationary orbit. The western half of North America and the eastern Pacific Ocean are monitored by GOES-10, positioned above the equator at 135° West longitude. The eastern half of North America and the western Atlantic are covered by GOES-8, located above the equator at 75° West longitude. Advanced sensors aboard the GOES satellites produce a continuous data stream, so images can be viewed at any instant. The imaging sensor produces visible and infrared images of the Earth’s terrestrial surface and oceans. Infrared images can depict weather conditions even during the night. Another sensor aboard the satellite can determine vertical temperature profiles, vertical moisture profiles, total precipitable water, and atmospheric stability.
Principles of Object Identification
Most people have no problem identifying objects in photographs taken from an oblique angle. Such views are natural to the human eye and are part of our everyday experience. However, most remotely sensed images are taken from an overhead or vertical perspective and at distances far removed from ground level. Both of these circumstances make the interpretation of natural and human-made objects somewhat difficult. In addition, images obtained from devices that receive and capture electromagnetic wavelengths outside human vision can present views that are quite unfamiliar.
To overcome the potential difficulties involved in image recognition, professional image interpreters use a number of characteristics to help them identify remotely sensed objects. Some of these characteristics include:
- Shape: this characteristic alone may serve to identify many objects. Examples include the long, linear form of highways, the intersecting runways of an airfield, the perfectly rectangular shape of buildings, or the recognizable shape of an outdoor baseball diamond.
- Size: noting the relative and absolute sizes of objects is essential to their identification. The scale of the image determines the absolute size of an object, so it is important to know the scale of the image being analyzed.
- Image Tone or Color: all objects reflect or emit specific signatures of electromagnetic radiation. In most cases, related types of objects emit or reflect similar wavelengths of radiation. Also, the type of recording device and recording medium produces images that reflect their sensitivity to a particular range of radiation. As a result, the interpreter must be aware of how the object being viewed will appear on the image examined. For example, on color infrared images, vegetation has a color that ranges from pink to red rather than the usual tones of green.
- Pattern: many objects arrange themselves in typical patterns. This is especially true of human-made phenomena. For example, orchards have a systematic arrangement imposed by a farmer, while natural vegetation usually has a random or chaotic pattern.
- Shadow: shadows can sometimes be used to get a different view of an object. For example, an overhead photograph of a towering smokestack or a radio transmission tower normally presents an identification problem. This difficulty can be overcome by photographing these objects at Sun angles that cast shadows. These shadows then display the shape of the object on the ground. Shadows can also be a problem to interpreters because they often conceal things found on the Earth’s surface.
- Texture: imaged objects display some degree of coarseness or smoothness. This characteristic can sometimes be useful in object interpretation. For example, we would typically expect to see textural differences when comparing an area of grass with field corn. Texture, just like object size, is directly related to the scale of the image.
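The dependence of absolute size on image scale, noted in the Size and Texture characteristics above, can be shown with a small worked example. The function name and the 1:50,000 scale are illustrative assumptions.

```python
# A minimal sketch: an object's ground size is its measured image size
# multiplied by the scale denominator (here converting centimeters to meters).

def ground_size_m(image_size_cm, scale_denominator):
    """Convert a measurement on the image (cm) to ground distance (meters)."""
    return image_size_cm * scale_denominator / 100.0  # cm -> m

# On a 1:50,000 photo, a feature measuring 2 cm spans 1,000 m on the ground.
print(ground_size_m(2, 50_000))  # 1000.0
```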
Global Positioning Systems
Another type of geospatial technology is the global positioning system (GPS), a key technology for acquiring accurate control points on Earth’s surface. To determine the location of a GPS receiver on Earth’s surface, a minimum of four satellites is required, using a mathematical process called triangulation. Triangulation ordinarily requires only three transmitters, but because the energy sent from each satellite travels at the speed of light, minor errors in timing can result in significant location errors on the ground. Thus, a minimum of four satellites is often used to reduce this error. This use of the geometry of triangles to determine location is not unique to GPS; it also serves a variety of other location needs, such as finding the epicenters of earthquakes.
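The triangulation idea above can be sketched in two dimensions: given known transmitter positions and measured distances, the receiver’s position follows from the geometry. This is a simplified, hypothetical sketch; real GPS solves in three dimensions and uses the fourth satellite to correct the receiver’s clock.

```python
# Two-dimensional trilateration sketch: subtracting the circle equations
# pairwise yields two linear equations, which are solved for (x, y).

def locate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three (position, distance) pairs."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A receiver at (1, 2), measured from three transmitters at known positions:
print(locate_2d((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 4), 5**0.5))  # (1.0, 2.0)
```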
A GPS receiver determines its location on Earth through a dynamic conversation with satellites in space. Each satellite transmits its orbital information, called the ephemeris, using a highly accurate atomic clock, along with the orbital positions of the satellite constellation, called the almanac. The receiver uses this information to determine its distance from a single satellite with the equation D = rt, where D = distance, r = rate, or the speed of light (299,792,458 meters per second), and t = time measured with the atomic clock. The atomic clock is required because the receiver is calculating distance from energy transmitted at the speed of light: travel times are tiny fractions of a second and demand a clock of the utmost accuracy.
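The D = rt range calculation above can be sketched directly; the travel time used in the example is an illustrative assumption.

```python
# Distance to a satellite follows from the signal's travel time
# at the speed of light (D = r * t).

SPEED_OF_LIGHT = 299_792_458  # meters per second

def satellite_range_m(travel_time_s):
    """Distance to a satellite given the signal travel time in seconds."""
    return SPEED_OF_LIGHT * travel_time_s

# A signal arriving after ~0.07 s implies a satellite roughly 21,000 km away.
print(satellite_range_m(0.07))
```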
Determination of location in field conditions was once a difficult task. In most cases, it required the use of a topographic map and landscape features to estimate location. However, technology has now made this task very simple. A GPS receiver can calculate one’s location to an accuracy of about 30 meters. These systems consist of two parts: a GPS receiver and a network of many satellites. Radio transmissions from the satellites are broadcast continually. The GPS receiver picks up these broadcasts and, through triangulation, calculates the altitude and spatial position of the receiving unit. A minimum of three satellites is required for triangulation.
GPS receivers can determine latitude, longitude, and elevation anywhere on or above the Earth’s surface from signals transmitted by a number of satellites. These units can also be used to determine direction, distance traveled, and routes of travel in field situations.
Geographic Information Systems
The advent of cheap and powerful computers over the last few decades has allowed for the development of innovative software applications for the storage, analysis, and display of geographic data. Many of these applications belong to a group of software known as Geographic Information Systems (GIS). Many definitions have been proposed for what constitutes a GIS, each conforming to the particular task being performed. In general, a GIS performs the following activities:
- Measurement of natural and human-made phenomena and processes from a spatial perspective. These measurements emphasize three types of properties commonly associated with these types of systems: elements, attributes, and relationships.
- Storage of measurements in digital form in a computer database. These measurements are often linked to features on a digital map. The features can be of three types: points, lines, or areas (polygons).
- Analysis of collected measurements to produce more data and to discover new relationships by numerically manipulating and modeling different pieces of data.
- Depiction of the measured or analyzed data in some type of display – maps, graphs, lists, or summary statistics.
The first computerized GIS began its life in 1964 as a project of the Rehabilitation and Development Agency Program within the government of Canada. The Canada Geographic Information System (CGIS) was designed to analyze Canada’s national land inventory data to aid in the development of land for agriculture. The CGIS project was completed in 1971, and the software is still in use today. The CGIS project also involved a number of key innovations that have found their way into the feature set of many subsequent software developments.
From the mid-1960s to the 1970s, developments in GIS mainly occurred at government agencies and universities. In 1964, Howard Fisher established the Harvard Lab for Computer Graphics, where many of the industry’s early leaders studied. The Harvard Lab produced a number of mainframe GIS applications including SYMAP (Synagraphic Mapping System), CALFORM, SYMVU, GRID, POLYVRT, and ODYSSEY. ODYSSEY was the first modern vector GIS, and many of its features would form the basis for future commercial applications. The Automatic Mapping System was developed by the United States Central Intelligence Agency (CIA) in the late 1960s. This project then spawned the CIA’s World Data Bank, a collection of coastlines, rivers, and political boundaries, and the CAM software package that created maps at different scales from this data. This development was one of the first systematic map databases. In 1969, Jack Dangermond, who studied at the Harvard Lab for Computer Graphics, co-founded Environmental Systems Research Institute (ESRI) with his wife, Laura. ESRI would become, within a few years, the dominant force in the GIS marketplace and create the ArcInfo and ArcView software. The first conference dealing with GIS took place in 1970 and was organized by Roger Tomlinson (a key individual in the development of CGIS) and Duane Marble (a professor at Northwestern University and early GIS innovator). Today, numerous GIS conferences run every year, attracting thousands of attendees.
In the 1980s and 1990s, many GIS applications underwent substantial evolution regarding features and analysis power. Many of these packages were being refined by private companies who could see the future commercial potential of this software. Some of the popular commercial applications launched during this period include ArcInfo, ArcView, MapInfo, SPANS GIS, PAMAP GIS, INTERGRAPH, and SMALLWORLD. It was also during this period that many GIS applications moved from expensive minicomputer workstations to personal computer hardware.
A technology exists that can bring together remote sensing data, GPS data points, spatial and non-spatial data, and spatial statistics into a single, dynamic system for analysis: the geographic information system (GIS). A GIS is a powerful database system that allows users to acquire, organize, store, and, most importantly, analyze information about the physical and cultural environments. A GIS views the world as overlapping physical or cultural layers, each with quantifiable data that can be analyzed. A single GIS map of a national forest could have layers such as elevation, deciduous trees, evergreens, soil type, soil erosion rates, rivers and tributaries, major and minor roads, forest health, burn areas, regrowth, restoration, animal species type, trails, and more. Each of these layers would contain a database of information specific to that layer.
GIS combines computer cartography with a database management system. GIS consists of three subsystems: (1) an input system that allows for the collection of data to be used and analyzed for some purpose; (2) computer hardware and software systems that store the data, allow for data management and analysis, and can be used to display data manipulations on a computer monitor; (3) an output system that generates hard copy maps, images, and other types of output.
Two basic types of data are typically entered into a GIS. The first type consists of real-world phenomena and features that have some form of spatial dimension. Usually, these data elements are depicted mathematically in the GIS as points, lines, or polygons that are referenced geographically (or geocoded) to some type of coordinate system. This type of data is entered into the GIS by devices like scanners, digitizers, and GPS units, or from air photos and satellite imagery. The other type of data is sometimes referred to as an attribute. Attributes are pieces of data that are connected or related to the points, lines, or polygons mapped in the GIS. This attribute data can be analyzed to determine patterns of importance. Attribute data is entered directly into a database, where it is associated with feature data.
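The link between a geocoded feature and its attribute record can be sketched as follows; all field names and values here are illustrative assumptions.

```python
# A geocoded feature (a point with latitude/longitude) joined to an
# attribute record through a shared id, as in a GIS attribute table.

feature = {"id": 1, "type": "point", "coords": (45.52, -122.68)}

attributes = {
    1: {"name": "monitoring station", "elevation_m": 15, "owner": "city"},
}

def lookup(feature, table):
    """Join a mapped feature to its attribute record by shared id."""
    return table[feature["id"]]

print(lookup(feature, attributes)["name"])  # monitoring station
```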
Within the GIS database, a user can enter, analyze, and manipulate data that is associated with some spatial element in the real world. The cartographic software of the GIS enables one to display the geographic information at any scale or projection and as a variety of layers which can be turned on or off. Each layer would show some different aspect of a place on the Earth. These layers could show things like a road network, topography, vegetation cover, streams and water bodies, or the distribution of annual precipitation received.
Nearly every discipline, career path, or academic pursuit uses geographic information systems because of the vast amount of data and information they can manage about the physical and cultural world. Disciplines and career paths that use GIS include conservation, ecology, disaster response and mitigation, business, marketing, engineering, sociology, demography, astronomy, transportation, health, criminal justice and law enforcement, travel and tourism, and news media, among many others.
A GIS primarily works from two different spatial models: raster and vector. Raster models in GIS are images much like a digital picture. Each image is broken down into a series of columns and rows of pixels, and each pixel is georeferenced to somewhere on Earth’s surface and represents a specific numeric value – usually a specific color or wavelength within the electromagnetic spectrum. Most remote sensing images come into a GIS as a raster layer.
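A raster layer can be sketched as a georeferenced grid of numeric values; the origin, cell size, and pixel values below are illustrative assumptions.

```python
# Rows and columns of pixels, each holding a numeric value and located on
# the ground by the grid's origin and cell size.

origin = (500_000.0, 4_600_000.0)  # upper-left corner (x, y), map units
cell_size = 30.0                   # meters per pixel

grid = [
    [12, 12, 40],
    [12, 40, 40],
    [40, 40, 40],
]

def pixel_center(row, col):
    """Map coordinates of a pixel's center from its row/column position."""
    x = origin[0] + (col + 0.5) * cell_size
    y = origin[1] - (row + 0.5) * cell_size  # rows count downward from the top
    return (x, y)

print(grid[1][2], pixel_center(1, 2))  # 40 (500075.0, 4599955.0)
```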
The other type of GIS model is called a vector model. Vector-based GIS models are based on the concept of points that are again georeferenced (i.e., given an x-, y-, and possibly z-location) to somewhere specific on the ground. From points, lines can be created by connecting a series of points, and areas can be created by closing loops of vector lines. For each of these vector layers, a database of information can be attached. For example, a vector line representing rivers could have a database associated with it containing length, width, stream flow, the government agencies responsible for it, and anything else the GIS user wants connected to it. What these vector models represent is also a matter of scale. For example, a city can be represented as a point or a polygon depending on how zoomed in you are to the location. A map of the world would show cities as points, whereas a map of a single county may show the city as a polygon with roads, populations, pipes, or grid systems within it.
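The vector model described above can be sketched with a line feature and its attribute record; all names and values are illustrative assumptions.

```python
# A line is an ordered series of georeferenced points; an attribute record
# is tied to the whole feature, as in a GIS vector layer.

points = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)]  # x, y vertices

river = {
    "geometry": points,
    "attributes": {"name": "example creek", "width_m": 8},
}

def line_length(vertices):
    """Sum of straight-line segment lengths along the line."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(vertices, vertices[1:])
    )

print(round(line_length(river["geometry"]), 3))  # 2.532
```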
This work is licensed under a Creative Commons Attribution 4.0 International License.