Happy New Year Mountains!

by Gong Liu January 20, 2012 09:27

The Video

At the beginning of the year I made the following Youtube video as a retrospective of my hiking/climbing activities in 2011. I have been asked how I made it, hence this post.

2011 was a fantastic year for me as a hiker. I logged some 430 hiking miles, summited more than 30 unique mountain peaks, visited 6 national parks, and met a bunch of awesome people. Here is a list of all the hiking trips I did in 2011:

How to Make It

Making the video involves these steps:

  1. Create the 3D greeting signs
  2. Erect the signs on mountain tops
  3. Create a tour visiting each sign
  4. Play and record the tour as a video

I'll explain each step in detail. But before you continue, make sure you have the following software tools: Google SketchUp (to model the 3D text) and Google Earth (to place the signs and record the tour).

Create the 3D greeting signs

Here are the steps to create a 3D greeting sign:

  1. Launch Google SketchUp
  2. Click Tools -> 3D Text
  3. Enter a greeting text and set the height and thickness (extrusion) of the text, as shown in Fig. 1. You want the sign big enough to be seen from miles away; I set the text height to 30 ft. As a comparison, the famous Hollywood Sign is 45 ft tall.
  4. Use the Paint Bucket on the tool bar to paint the text in the color or texture of your choice, as shown in Fig. 2.
  5. Click File -> Export -> 3D Model..., enter a name for the file where the 3D model will be saved, select the .dae (COLLADA) format, which is what Google Earth recognizes, and press OK. This will save your 3D sign to a .dae file, say "Hello.dae".
  6. Repeat steps 2 - 5 for each additional sign you have. Each sign must have its own .dae file.


Fig. 1. Enter text and its height and thickness


Fig. 2. Color the text using Paint Bucket tool

Erect the signs on mountain tops

Basically you'll need to create a KML document that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2"
xmlns:kml="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom"> 
<Document>
  <name>Hello Sign Demo</name>
  <Folder>
   <name>Hello</name>
   <Placemark id="Hello">
    <name>Hello</name>
    <description><![CDATA[Hello Hollywood]]></description>
     <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
       <longitude>-118.322212</longitude>
       <latitude>34.134282</latitude>
       <altitude>15</altitude>
      </Location>
      <Orientation>
       <heading>0</heading>
       <tilt>-90</tilt>
       <roll>0</roll>
      </Orientation>
      <Scale>
       <x>1</x>
       <y>1</y>
       <z>1</z>
      </Scale>
      <Link>
       <href>Hello.dae</href>
      </Link>
     </Model>
  </Placemark>
 </Folder>
</Document>
</kml>

Here are some highlights about the KML document:

  • The 3D model, "Hello.dae", is referenced under the <Link> tag. This assumes that "Hello.dae" is in the same folder as the KML document.
  • <tilt>-90</tilt> erects the sign from horizontal position to vertical position.
  • If you want to align the long side of the sign (width) to a mountain ridge or a certain direction, adjust the <roll>.
  • You need to know the latitude/longitude coordinates of the mountain top where you want to put the sign. The coordinates go under the <Location> tag.
  • The altitude is relative to the ground, as indicated by the <altitudeMode> tag. The mountain top may not be level along the length of the sign, so if you set the altitude to 0 (ground level), part of the sign may end up buried underground. Adjust <altitude> so that the whole bottom edge of the sign sits just above the ground.
  • I put the <Placemark> that represents the 3D sign in a <Folder> for easy organizing. For each additional sign, just add another <Folder>.
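
For instance, a second sign can be added as another <Folder> alongside the first. Here is a minimal sketch; the file name "HappyNewYear.dae" and the coordinates are placeholders for your own sign and summit:

  <Folder>
   <name>Happy New Year</name>
   <Placemark id="HappyNewYear">
    <name>Happy New Year</name>
    <Model>
     <altitudeMode>relativeToGround</altitudeMode>
     <Location>
      <longitude>-118.10</longitude>
      <latitude>34.22</latitude>
      <altitude>10</altitude>
     </Location>
     <Orientation>
      <heading>0</heading>
      <tilt>-90</tilt>
      <roll>0</roll>
     </Orientation>
     <Scale><x>1</x><y>1</y><z>1</z></Scale>
     <Link>
      <href>HappyNewYear.dae</href>
     </Link>
    </Model>
   </Placemark>
  </Folder>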

Fig. 3 shows what it looks like when the KML document is loaded in Google Earth.


Fig. 3. The "Hello" sign is put on top of Mt Lee, above the famous Hollywood Sign.

Create a tour visiting each sign

In Google Earth a tour in its simplest form is just a series of camera points looking at an object of interest. Google Earth has a built-in touring feature that automatically generates a tour based on a predefined path. The automatically generated tour is fixed in terms of range (distance from camera to object), tilt (camera angle) and speed. It is most suitable for flying along a hiking trail, for example. In our case, we are interested in the individual 3D signs, not the paths between them. We want to see a sign and its mountain top from different distances, angles and directions. This means we have to create the tour manually, one camera point at a time. That can be quite time consuming if we have a lot of signs and each sign requires multiple camera points. The flip side is that we get to be our own film director. How cool is that?! We decide the camera settings, the shots, and the overall effects. We can sweep across the sign, pierce through it, and approach and leave it at any angle, direction and speed we like. It's all up to our imagination and creativity!

As an example I have created 4 views (camera points) around our "Hello" sign, as shown in Fig. 4.


Fig. 4. Four camera points looking at "Hello" sign

Here are the steps to turn a view into a <gx:FlyTo> tag - the building block of a tour:

  1. Manipulate Google Earth to form the view you want using the navigation controls.
  2. Click the Record a Tour button. The tour recorder appears at the bottom-left of the map screen.
  3. Click the Record button twice in quick succession. This creates a mini tour of the current view and brings up the Tour Player at the bottom-left of the map screen.
  4. Click the Save button on the Player and the OK button in the New Tour dialog box. This adds an Untitled Tour entry to the KML document tree in the left-hand panel.
  5. Right click the Untitled Tour entry and select Cut.
  6. Open a text editor, such as Notepad, and Paste the Untitled Tour to the text editor.
  7. You should see two <gx:FlyTo> tags in the editor. Copy the first one and paste it into the KML document that contains your 3D sign.
  8. Repeat steps 1 - 7 for each additional camera point.
  9. Repeat steps 1 - 8 for each additional sign.

The final result looks like this:

<Document>
  <name>Hello Sign Demo</name>
  <open>1</open>
  <gx:Tour>
    <name>Play me!</name>
    <gx:Playlist>
 
      <gx:FlyTo>
        <gx:duration>7.0</gx:duration>
         <LookAt>
          <longitude>-118.3215414751394</longitude>
          <latitude>34.13467476710581</latitude>
          <altitude>0</altitude>
          <heading>-0.7037646645994531</heading>
          <tilt>77.98378575832396</tilt>
          <range>299.6121338753385</range>
          <gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
        </LookAt>
      </gx:FlyTo>

      <gx:Wait>
        <gx:duration>6.0</gx:duration>
      </gx:Wait>

      <gx:FlyTo>
        <gx:duration>3.0</gx:duration>
        <gx:flyToMode>smooth</gx:flyToMode>
        <LookAt>
          <longitude>-118.3219234024494</longitude>
          <latitude>34.13433034303296</latitude>
          <altitude>0</altitude>
          <heading>53.48179823778509</heading>
          <tilt>78.30176989580932</tilt>
          <range>200.1038065956998</range>
          <gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
        </LookAt>
      </gx:FlyTo>

      <gx:FlyTo>
        <gx:duration>3.0</gx:duration>
        <gx:flyToMode>smooth</gx:flyToMode>
        <LookAt>
          <longitude>-118.3221082458166</longitude>
          <latitude>34.134407451771</latitude>
          <altitude>0</altitude>
          <heading>130.6419059159441</heading>
          <tilt>79.00718054353172</tilt>
          <range>130.4629410309265</range>
          <gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
        </LookAt>
      </gx:FlyTo>

      <gx:FlyTo>
        <gx:duration>3.0</gx:duration>
        <gx:flyToMode>smooth</gx:flyToMode>
        <LookAt>
          <longitude>-118.3208095498427</longitude>
          <latitude>34.13467780708835</latitude>
          <altitude>0</altitude>
          <heading>-123.1213268708733</heading>
          <tilt>76.57841332587158</tilt>
          <range>73.18637186655157</range>
          <gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
        </LookAt>
      </gx:FlyTo>

    </gx:Playlist>
  </gx:Tour>

  <Folder>
    <!-- the Hello Placemark here -->
  </Folder>
</Document>

Some key points about the KML document:

  • For each <gx:FlyTo> add a <gx:duration> tag to allow some time for Google Earth to fly from the previous point to the current one.
  • For the first <gx:FlyTo> add an extra <gx:Wait> to give the tiled map images time to load fully. This is especially helpful on a slow computer or a slow internet connection.
  • For each subsequent <gx:FlyTo> add a <gx:flyToMode>smooth</gx:flyToMode> tag for a smooth transition from one point to the next.
  • You can use <gx:SoundCue> to add background music to the tour.
  • You can use <gx:AnimatedUpdate> to pop up a balloon with additional information about the sign or the mountain.
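
For example, both extensions can be dropped straight into the <gx:Playlist>. This is a minimal sketch; "music.mp3" is a hypothetical audio file sitting next to the KML, and the targetId must match the id of the sign's <Placemark> ("Hello" in our case):

      <gx:SoundCue>
        <href>music.mp3</href>
      </gx:SoundCue>

      <gx:AnimatedUpdate>
        <gx:duration>2.0</gx:duration>
        <Update>
          <targetHref></targetHref>
          <Change>
            <Placemark targetId="Hello">
              <gx:balloonVisibility>1</gx:balloonVisibility>
            </Placemark>
          </Change>
        </Update>
      </gx:AnimatedUpdate>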

Play and record the tour as a video

Once you have created the KML document that includes your 3D signs and the tour visiting each of them, you can play the tour by opening the KML in Google Earth, highlighting the tour entry in the left-hand panel, and clicking the Play Tour button, as shown in Fig. 5.


Fig. 5. Play a tour

To make a video out of the tour, you can use one of the following three methods, depending on what software and hardware you have:

  • Using Google Earth Pro. Google Earth Pro Edition has a built-in video capture function.
  • Using a 3rd-party screen recording program. I tried Camtasia Studio, but I had difficulty recording an HD video with it. Playing the tour and recording the screen in full HD at the same time turned out to be too much for my lil' old computer.
  • Recording off the screen with your camcorder or camera. I ended up using this method for the video in this post. I just set my point-and-shoot camera on a tripod and recorded the screen and sound in one take. Not the best quality, but good enough for a Youtube video.

As a final touch I used the simple video editing features in Youtube to add annotations and credits. Of course you can use your favorite video editing program to do the same and much more to make your video look more professional.

Downloads

Hello Sign Example HelloSignDemo.kmz 15.2 KB
Happy New Year Mountains with background music HappyNewYearMountains.kmz 25.7 MB
Happy New Year Mountains w/o background music HappyNewYearMountains1.kmz 3.17 MB

Hiking Trail Fly-by Tours with Google Earth

by Gong Liu October 30, 2011 18:00

Mt. Baldy via Acorn Trail, Pine Mountain & Dawson Peak 

The route is considered one of the hardest routes to the Baldy Summit. This rollercoaster hike challenges you both ways, out and back.

Trail stats: 16 miles r.t., 7600 ft elev. gain.

Music: "A Little Piece" by The Jezabels

C2C (Cactus to Clouds)

C2C is ranked the #5 hardest dayhike in America by Backpacker Magazine for its insane elevation gain and mileage. Skyline Trail up, Marion Mountain Trail down.

Trail stats: 21.7 miles, 11200 ft elev. gain. 

Music: "'Till I Collapse" by Eminem & Nate Dogg 

Using Google Earth as Photo Location Finder

by Gong Liu September 14, 2011 05:19

Introduction

I am an avid hiker. I keep a photolog on Facebook for each of my hikes. The photolog documents places I hiked, things I saw and people I met. The other day when I was compiling my photolog for a recent backpacking trip to Banner Peak in Ansel Adams Wilderness (see a slideshow of the trip at the end of this post), I came across this article on the internet titled "Lonely Grave in the Sierra" by Dr. H. Galic, a professor at Stanford University. The article is about a forgotten memorial to two climbers who died on Banner Peak in the summer of 1934, and how the author relentlessly pursued every lead about the grave, the victims and the people involved, and finally put the history puzzle pieces together to recreate the scenario of what happened on that fateful day. Along the way he recounted the history of mountaineering accidents and rescues in the High Sierra. It's like an episode of History Detectives, a truly fascinating story.

As much as I enjoyed the article, I couldn't help but notice that there is no mention of the exact location of the grave. In fact, Dr. Galic has some concerns about it:

"I am intentionally being vague about the exact location of the grave. If you are a genuine hiker or climber, and familiar with the area, you will easily recognize geographical features mentioned in the text. For others, who might see the grave site as a tourist attraction, and plan to come solely for that reason: Please do not bother, you will never find it."  

To me this only raises curiosity. First of all, I didn't know there was a grave before my Banner Peak trip. Now that I've learned about the grave and the story behind it, I really want to know where it is located so I can pay a visit next time around. Secondly, I'm a software engineer specializing in digital maps, GPS, and location-based services. I'm obsessed with finding locations. Call it an occupational habit. I can't rest without putting the grave's location on a map somehow!

So what exactly are the geographical features mentioned in the text that may help me locate the grave? Banner Peak, Mt Ritter, Lake Catherine, and Thousand Island Lake are mentioned multiple times in the text. These landmarks help to establish the general area of the grave. So at least I know in which "haystack" I should look for the "needle". Then he mentioned some remote valley where he found the grave on his way to the back side of Mt. Ritter. This helps to narrow down the search further, because there are only a few valleys that can be used to access the back side of Mt. Ritter with reasonable effort. But that's pretty much as far as the text alone can take me. Fortunately, in the article Dr. Galic included a few photos of the grave with mountains and valleys as a backdrop. This makes me think: If a picture is worth a thousand words, is it possible that I can deduce location information from it? Since the grave is rather small, the photos must have been taken at close range. So if I find the location of the photos, I find the grave.

Incidentally, I'm not the only one who is interested in finding the location of a photo. The CIA/NSA are interested in it too! A recent IARPA (Intelligence Advanced Research Projects Activity) Solicitation expresses the desire for a photo location finder. And this news article reveals that the spy agencies' intention is to use the finder to track down terrorists from propaganda photos and videos. I'm sure if such a system is ever developed, my little grave location mystery could be solved with it in a snap of the fingers. But before then, before millions of dollars are spent on the project, let me just show you what can be done right now with existing technology.

Getting the context of a photo 

Before we can find the location of a photo in any outdoor terrain, we need to know its context, such as its general area (country, region, time zone, nearby city, etc.), date/time, and perhaps camera settings. A good place to start for such information is the photo's Exif (Exchangeable image file format) header. If we are lucky and the photo happens to be "geotagged", we can get its exact location from the header. There are tools we can use to dump the header and see everything in it. Fig. 1 is a photo taken with a smartphone. What is special about it is that it is geotagged - when the photo was taken, the smartphone's built-in GPS was on and its latitude and longitude coordinates were written to the photo's Exif header. Fig. 2 is a dump of the Exif header. Toward the end of it are the latitude and longitude coordinates. If we upload the photo to an Exif-aware website such as Flickr, we can see the photo's exact location on the map. Please refer to my earlier post for more info about using geotagged photos.
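
To name one concrete (and free) example of such a tool, ExifTool can dump just the GPS tags from the command line; the file name below is a placeholder:

      exiftool -GPSLatitude -GPSLongitude -GPSAltitude photo.jpg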

Fig. 1. A geotagged photo taken with a smartphone. Click to see original. 

Fig. 2. Exif header of above photo. 

Many smartphones are capable of tracking locations by means other than GPS. They use cell towers and WiFi networks as location aids. The locations obtained this way are not as accurate as GPS, and thus may not be usable for geotagging photos taken with these smartphones. But they are good enough for providing context information, assuming, of course, we have access to the location data and are able to use it to derive a photo's approximate location based on its time stamp. This article shows a visualization of iPhone location data. It is interesting to note that Apple's location tracking practice caused such an uproar over privacy concerns that Congressional hearings were held.

Only photos taken with newer generations of digital cameras or smartphones may contain geotags. Most photos don't. Photos that have been edited with photo editing software may lose their original headers. Photos that have been uploaded to some websites (e.g. Facebook) may get their headers trimmed to the bare minimum. And then there are software programs that allow a user to remove information from a photo's header before publishing. In all these cases we have to derive the context information of a photo from sources other than the Exif header. For instance, the most obvious way is to ask the person who took the photo or the publisher who published it. Or we may extract that information from the photo's caption or related text, as in the case of our grave site photos. The point is that getting the context of a photo is more of an intelligence task than a software engineering one. Whoever wins the IARPA bid is likely to follow some sort of heuristic approach that incorporates human intelligence into the search algorithm.

Finding photo location with Google Earth

Assuming we know the context of a photo, how are we going to find its location or the location of an object in the photo? The idea is simple: We'll need a 3D digital ground model, such as Google Earth. We identify some recognizable geographic features from both the photo and the digital model. We try to match the photo's perspective in relation to the geographic features with those in the digital model. We estimate the location of the object visually in the digital model, which yields a location measurement in terms of latitude, longitude, and elevation.

In Fig. 3 the left side is one of the original photos of the grave site by Dr. Galic. Notice that the grave is just a small bronze plaque on top of a pile of rocks. Since I have a rough idea about the general area from the story as well as from my field trip, I can quickly zoom in on the area in Google Earth and locate the valley where the grave is supposed to be. With some tweaking I can make Google Earth display a perspective similar to that of the photo in relation to the background mountains, as shown on the right side of Fig. 3. By referencing the grave location in the photo I can mark my best estimate of the location of the grave in Google Earth with a pushpin. The pushpin's location measurement can then be read at the bottom-center of the Google Earth screen (in the red oval). It's just that simple!

Fig. 3. Original photo #1 (left) by H. Galic and matched perspective in Google Earth (right). Click to enlarge. 

From this exercise the following points should be noted:

  • We come to appreciate the importance of the context information of a photo. Without it, we wouldn't know where to start. Sifting through all possible mountains and all possible perspectives really would be like finding a needle in a haystack.
  • Matching the perspective of the photo in Google Earth involves quite a bit of manual tweaking. This is where I see the value of using an automated computer program for the job, as described in the IARPA Solicitation: "automated geolocation technologies to work efficiently and accurately over all terrain and large search areas". To be fully automatic, the program must be able to recognize by itself which key points or features to match between the photo and the digital model, a task that is easy for a human but can be very challenging for a computer.
  • Unlike Google Street View, which provides a photo-realistic environment, Google Earth in rural or wilderness areas is only an approximate representation of reality. It lacks the necessary detail to identify, for example, the spring and the meadow mentioned in the article, or the rock outcrop in the foreground of the above photo. As a 3D model, it is created by interpolating surfaces from survey data points, and thus is limited in accuracy by whatever data sources were used.

Because of all the inaccuracy and uncertainty involved from various sources - the photo, the digital model, the underlying data, and human factors - it would be nice to have an estimate of the error of our method. This can be done by having additional observations, from either different photos by the same observer or the same photo by different observers. Repeated observations of the same photo by the same person are not statistically valid, because later observations may be influenced by earlier ones. Fig. 4 shows, side by side, a second photo of the grave site by Dr. Galic and a matched perspective in Google Earth, as well as a different estimate of the grave location (the yellow pushpin).

Fig. 4. Original photo #2 (left) by H. Galic and matched perspective in Google Earth (right). Click to enlarge. 

Now we have two observations resulting in two estimated grave locations. Based on this data we can establish a set of concentric circular search areas that look like a bull's eye, as shown in Fig. 5. The center of the bull's eye is the midpoint of the line segment with the two estimated grave locations as its end points. The radius of the innermost circle (red) is equal to half of the length of that line segment. This radius actually represents one standard deviation of the observations in the X-Y plane. The radius of the middle circle (yellow) is simply 2 times the standard deviation, and the radius of the outermost circle (blue), 3 times. According to the 3-sigma rule, if the observation error follows a normal distribution (it usually does, as the error comes from multiple independent sources), the probability of finding the grave in the red circle is about 68%, the probability in the yellow circle, 95%, and the probability in the blue circle, 99.7%. In our case, the largest search area (blue) is only about 600 ft across. Conducting a search in it should be quite manageable.

The bull's eye is drawn on a contour map. If we have information about the elevation of the grave, we can narrow down the search further by searching only an elevation range within the bull's eye.

Note that the above error calculation only applies to errors introduced when trying to locate a photo, or an object in the photo, in a digital model. It does not consider errors in the field, which can only be estimated with field data.

 

Fig. 5. Bull's eye search areas. Click to enlarge. 

 

Now that I have marked the grave location on the map, I can finally rest easy. But I still have to find the real thing next time I'm there. I have high confidence that I will find it, and when I do I will take a picture of it with my geotagging-capable camera. That's when I will be really, truly satisfied.

Bull's eye calculation for N observations

Let's say we have N observations of the location of an object of interest from a digital model

(1)   $(x_i, y_i), \quad i = 1, 2, \ldots, N$

where $x_i$ is the longitude and $y_i$ the latitude of the i-th observation.

The center of the bull's eye, $(\mu_x, \mu_y)$, can be calculated as follows:

(2)   $\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i$
(3)   $\mu_y = \frac{1}{N}\sum_{i=1}^{N} y_i$

The standard deviations of longitude and latitude are given by Equs. 4 and 5, respectively:

(4)   $\sigma_x = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (x_i - \mu_x)^2}$
(5)   $\sigma_y = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (y_i - \mu_y)^2}$

The radii of the three circles, from the smallest to the largest, can then be calculated as follows:

(6)   $r_1 = \sqrt{\sigma_x^2 + \sigma_y^2}$
(7)   $r_2 = 2 r_1$
(8)   $r_3 = 3 r_1$
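
As a quick check against the two-observation case above: with N = 2 the two estimates lie some distance d apart, so $\sigma_x^2 + \sigma_y^2 = (d/2)^2$ and $r_1 = d/2$, $r_2 = d$, $r_3 = 3d/2$. The blue circle is then 3d across, so a blue circle roughly 600 ft in diameter implies the two location estimates were about 200 ft apart.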

Slideshow: Banner Peak Expedition

Sinking Balls - Using Google Earth's Seafloor View

by Gong Liu July 21, 2011 14:01

Introduction

Recently Google, in collaboration with Columbia University and about a dozen research labs, has released a set of high res maps of seafloor terrain. See the following intro video about the new dataset:

This looks pretty amazing. Possible uses? You can probably use it as the background of an aquarium screensaver or an underwater game, for example. But before you start to develop your next Google Earth-based hit game "Angry Penguins", you need to know how to sink an object properly. And that is what this post is about - sinking an object in a physically correct way using Google Earth's new seafloor terrain.

Formulation

When an object sinks in a medium, such as sea water, at a certain velocity, there are three major forces involved: gravity, buoyancy and drag.

  

Fig. 1. Gravity, buoyancy and drag forces exerted on a sinking object.

The drag force is a function of the velocity of the sinking object and can be expressed as:

(1)   $F_d = \frac{1}{2} C_d \rho A v^2$

where,

      Fd - drag force
      Cd - drag coefficient
      ρ - the density of the medium
      A - the object's cross section area
      v - the velocity of the object 

The drag coefficient, Cd, is a function of a dimensionless number called the Reynolds number, which in turn is a function of, among other factors, the object's velocity. However, within a relatively large range of Reynolds numbers, the drag coefficient remains approximately constant (≈0.44). So in this post we'll stick with a constant drag coefficient. In other words, the drag on the object is considered quadratic in the velocity.

Referring to Fig. 1, the equation of motion of the object reads:   

(2)   $m \frac{dv}{dt} = -m g + m^{*} g + F_d$

Here the second term to the right is the buoyant force. m* is the mass of the displaced medium. Substitute Equ. 1 into Equ. 2 and we have:

(3)   $\frac{dv}{dt} = \beta \left( \gamma^2 v^2 - 1 \right)$

where,

(4)   $\beta = g \left( 1 - \frac{m^{*}}{m} \right) = g \left( 1 - \frac{\rho}{\rho_{obj}} \right)$
(5)   $\gamma = \sqrt{ \frac{C_d \rho A}{2 m \beta} }$

      ρ - the density of the medium
      ρobj - the density of the object
      m - the mass of the object
      m* - the mass of the displaced medium 

The differential Equ. 3 has an analytical solution. For the initial condition

(6)   $v(0) = v_0$

the solution (the velocity of the object at any moment t) is:

(7)   $v(t) = -\frac{1}{\gamma} \tanh\left( \beta \gamma t - \tanh^{-1}(\gamma v_0) \right)$

Since the hyperbolic tangent is bounded to (-1, 1), v(t) is bounded in magnitude by 1/γ; this limiting value, -1/γ, is called the terminal velocity, i.e.

(8)   $v_t = \lim_{t \to \infty} v(t) = -\frac{1}{\gamma}$

The negative sign indicates that the direction of velocity is toward the seafloor. The elevation of the object at any moment t can be obtained by integrating Equ. 7:

(9)   $h(t) = h_0 - \frac{1}{\beta \gamma^2} \ln \frac{ \cosh\left( \beta \gamma t - \tanh^{-1}(\gamma v_0) \right) }{ \cosh\left( \tanh^{-1}(\gamma v_0) \right) }$

with the initial condition

(10)   $h(0) = h_0$

When v0 = 0, Equs. 7 and 9 are reduced to

(11)   $v(t) = -\frac{1}{\gamma} \tanh( \beta \gamma t )$

and

(12)   $h(t) = h_0 - \frac{1}{\beta \gamma^2} \ln \cosh( \beta \gamma t )$

respectively. If you are interested in how all these are derived, please refer to the last section of this post.

Fig. 2 is a plot of Equ. 12 and Fig. 3, a plot of Equ. 11 for a basketball-sized object with various densities. 

 
Fig. 2. The object's elevation change over time


Fig. 3. The object's velocity change over time

The plots show that the heavier the object, the faster it sinks, and the higher its terminal velocity. For a good-sized object like the one in the example, it only takes a few seconds for it to reach its terminal velocity. After that it maintains the terminal velocity until it lands on the seafloor. So if you want to simulate an object sinking into very deep water, you can pretty much ignore the cumbersome Equ. 7 and use Equ. 8 instead.
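
As a rough worked example (the numbers here are assumed for illustration, not taken from the plots): for a ball of radius 0.12 m and density ρobj = 2000 kg/m³ sinking in sea water (ρ ≈ 1025 kg/m³, Cd ≈ 0.44), we get A = πr² ≈ 0.045 m², m ≈ 14.5 kg and β = g(1 - ρ/ρobj) ≈ 4.8 m/s², so from Equs. 5 and 8

      $|v_t| = \frac{1}{\gamma} = \sqrt{ \frac{2 m \beta}{C_d \rho A} } \approx 2.6\ \mathrm{m/s}$

i.e. such a ball settles into a terminal speed of roughly 2.6 m/s within a second or two of entering the water.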

Implementation

I wrote a small program in C# that utilizes the above formulas to calculate the dynamics of multiple sinking objects and spits out a KML file, which can be rendered in Google Earth with the new seafloor terrain. Fig. 4 shows a screenshot of the user interface of the program. 

    

Fig. 4. Sink KML Generator UI

The object to be sunk is actually a 3D model in COLLADA format. You can download a 3D model from numerous online 3D warehouses, or you can create your own using a 3D tool such as Google SketchUp. In the above example, a 3D ball or globe is used. Once a 3D model for the object is selected, you can specify how many copies of the object to generate and where exactly they are located. The location data in the input field has this format:

      lat0,lon0,elev0 lat1,lon1,elev1 lat2,lon2,elev2 ... 

The elevation here is the seafloor elevation (below sea level) and is in meters, not feet. You can read the coordinates off Google Earth by hovering your mouse pointer over a desired location.
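
For example, a single ball dropped at the location used in the KML snippet below would be entered as follows (the seafloor elevation of -3000 is a made-up round number; use the value Google Earth reports at your chosen spot):

      36.693736,-125.379875,-3000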

Each object is assigned a random density (within a certain range) so that they sink at different speeds. The program also adds a gentle random spin to each object to enhance the visual effect.

The following is a snippet of the KML generated by the program:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
<Document>
   <name>Sinking Balls</name>
   <description><![CDATA[by <a href="http://tofuculture.com">Gong Liu</a>]]></description>
   <Folder>
      <name>Ball 0</name>
      <Placemark>
         <TimeSpan>
            <begin>2011-05-19T00:00:00</begin>
            <end>2011-05-19T00:00:01</end>
         </TimeSpan>
         <Model>
            <altitudeMode>relativeToGround</altitudeMode>
            <Location>
               <longitude>-125.379875</longitude>
               <latitude>36.693736</latitude>
               <altitude>0</altitude>
            </Location>
            <Orientation>
               <heading>206</heading>
               <tilt>169</tilt>
               <roll>10</roll>
            </Orientation>
            <Scale><x>1</x><y>1</y><z>1</z></Scale>
            <Link><href>files/Earth.dae</href></Link>
         </Model>
      </Placemark>
      <Placemark>
         <TimeSpan>
            <begin>2011-05-19T00:00:01</begin>
            <end>2011-05-19T00:00:02</end>
         </TimeSpan>
         <Model>
            <altitudeMode>relativeToGround</altitudeMode>
            <Location>
               <longitude>-125.379875</longitude>
               <latitude>36.693736</latitude>
               <altitude>-3.767087</altitude>
            </Location>
            <Orientation>
               <heading>238.4</heading>
               <tilt>169</tilt>
               <roll>10</roll>
            </Orientation>
            <Scale><x>1</x><y>1</y><z>1</z></Scale>
            <Link><href>files/Earth.dae</href></Link>
         </Model>
      </Placemark>
      ...
   </Folder>
   <Folder>
      <name>Ball 1</name>
      <Placemark>...</Placemark>
      <Placemark>...</Placemark>
      ...
   </Folder>
   ...
</Document>
</kml>

Notice that each object (ball) has its own folder so that when loaded in Google Earth they can be shown or hidden individually. 

Demonstration

The following series of screenshots shows the sinking objects (balls) in Google Earth's seafloor view.

Fig. 5. The objects are at sea level at the beginning

Fig. 6. Showing the path of each object and notice the spin 

Fig. 7. Sinking objects with seafloor as the background 

Fig. 8. Another view closer to the seafloor

You can download the sample KML file in the Downloads section below and play it live on your own computer with Google Earth. If you can't be bothered to install Google Earth or download the KML file, you can watch the following video I made. Enjoy!

Notice that the balls are not moving very smoothly at the current one-second interval. This can be fixed by using a smaller time interval when generating the KML (see the sketch below). Also, it helps to use a joystick to browse around the seafloor terrain. It's like driving a submarine.
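
For instance, a quarter-second step would simply mean tighter <TimeSpan> stamps on consecutive placemarks. A sketch (KML's dateTime values accept fractional seconds, though how smoothly Google Earth animates them is worth testing on your machine):

         <TimeSpan>
            <begin>2011-05-19T00:00:00.00</begin>
            <end>2011-05-19T00:00:00.25</end>
         </TimeSpan>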

Downloads

You can download the program executable and a sample kmz file generated by the program from the following links: 

SinkObj.zip (8.44 kb)
SinkBalls.kmz (135.04 kb)

If you are interested in the C# source code, please contact me.

More Math Stuff

Equ. 3 belongs to a special type of differential equations called Riccati Equations. To solve a Riccati Equation, we need to know a particular solution of it. In our case, we can easily spot a particular solution such as: 

(13)   $v_p = -\frac{1}{\gamma}$

Now we introduce a new variable z:

(14)   $z = \frac{1}{v - v_p} = \frac{1}{v + \frac{1}{\gamma}}$

or

(15)   $v = \frac{1}{z} - \frac{1}{\gamma}$

Substituting Equ. 15 into Equ. 3 for v, we have

(16)   $\frac{dz}{dt} - 2 \beta \gamma z = -\beta \gamma^2$

Equ. 16 is a linear differential equation. The general solution is given by

(17)   $z = C e^{2 \beta \gamma t} + \frac{\gamma}{2}$

Substituting Equ. 17 back into Equ. 15 yields

(18)   $v = \frac{1}{C e^{2 \beta \gamma t} + \frac{\gamma}{2}} - \frac{1}{\gamma}$

The constant C can be determined as

(19)   $C = \frac{1}{v_0 + \frac{1}{\gamma}} - \frac{\gamma}{2} = \frac{\gamma (1 - \gamma v_0)}{2 (1 + \gamma v_0)}$

with this initial condition

(20)   $v(0) = v_0$

v0 does not always have to be 0. For the situation where we want to drop a dead body into the ocean from an aircraft carrier, for example, the body has already gained a certain speed before it hits the water. Now substitute Equ. 19 into Equ. 18 and with some algebraic manipulation we get

(21)   $v(t) = -\frac{1}{\gamma} \tanh\left( \beta \gamma t - \tanh^{-1}(\gamma v_0) \right)$

To prove Equ. 9, integrate Equ. 21

(22)   $h(t) = \int v(t)\, dt = -\frac{1}{\beta \gamma^2} \ln \cosh\left( \beta \gamma t - \tanh^{-1}(\gamma v_0) \right) + C_1$

With the initial condition

(23)   $h(0) = h_0$

the constant C1 can be determined as

(24)   $C_1 = h_0 + \frac{1}{\beta \gamma^2} \ln \cosh\left( \tanh^{-1}(\gamma v_0) \right)$

Substitute Equ. 24 into Equ. 22 and with some algebraic manipulations we get

(25)   $h(t) = h_0 - \frac{1}{\beta \gamma^2} \ln \frac{ \cosh\left( \beta \gamma t - \tanh^{-1}(\gamma v_0) \right) }{ \cosh\left( \tanh^{-1}(\gamma v_0) \right) }$
