Welcome to the OStack Knowledge Sharing Community for programmers and developers - Open, Learn and Share

d3.js - Pre-projected geometry v getting the browser to do it (aka efficiency v flexibility)

To improve the performance of my online maps, especially on smartphones, I'm following Mike Bostock's advice to prepare the geodata as much as possible before uploading it to the server (as per his command-line cartography). For example, I'm projecting the TopoJSON data, usually via d3.geoConicEqualArea(), at the command line rather than making the viewer's browser do this grunt work when loading the map.

However, I also want to use methods like .scale, .fitSize, .fitExtent and .translate dynamically, which means I can't "bake" the scale or translate values into the TopoJSON file beforehand.

Bostock recommends using d3.geoTransform() as a proxy for projections like d3.geoConicEqualArea() if you're working with already-projected data but still want to scale or translate it. For example, to flip a projection on the y-axis, he suggests:

var reflectY = d3.geoTransform({
      point: function(x, y) {
        this.stream.point(x, -y);
      }
    }),

    path = d3.geoPath()
        .projection(reflectY);

My question: If I use this D3 function, aren't I still forcing the viewer's browser to do a lot of data processing, which will worsen the performance? The point of pre-processing the data is to avoid this. Or am I overestimating the processing work involved in the d3.geoTransform() function above?


1 Answer


If I use this D3 function, aren't I still forcing the viewer's browser to do a lot of data processing, which will worsen the performance? The point of pre-processing the data is to avoid this. Or am I overestimating the processing work involved in the d3.geoTransform() function above?

Short Answer: You are overestimating the amount of work required to transform projected data.


Spherical Nature of D3 geoProjections

A d3 geoProjection is unusual. Many platforms, tools, and libraries take points consisting of latitude-longitude pairs and treat them as though they lie on a Cartesian plane. This simplifies the math enormously, but comes at a cost: paths follow Cartesian routing.

D3 treats longitude-latitude points as what they are: points on a three-dimensional globe. This costs more computationally but provides other benefits, such as routing path segments along great-circle routes.
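The difference between the two treatments can be sketched in plain JavaScript (no d3 required; `cartesianMidpoint` and `greatCircleMidpoint` are illustrative names, not library functions). Finding the point halfway along a segment is one addition per coordinate on a flat plane, but requires trigonometry and 3D vector math on a globe:

```javascript
// Sketch: the midpoint of two longitude/latitude points, first treated as
// flat Cartesian coordinates, then along the great circle on a sphere.
const rad = Math.PI / 180;
const deg = 180 / Math.PI;

// Flat treatment: one addition and one division per coordinate.
function cartesianMidpoint([lon1, lat1], [lon2, lat2]) {
  return [(lon1 + lon2) / 2, (lat1 + lat2) / 2];
}

// Spherical treatment: convert to 3D unit vectors, average, normalize,
// and convert back - several trig calls per point.
function greatCircleMidpoint([lon1, lat1], [lon2, lat2]) {
  const toVec = (lon, lat) => [
    Math.cos(lat * rad) * Math.cos(lon * rad),
    Math.cos(lat * rad) * Math.sin(lon * rad),
    Math.sin(lat * rad)
  ];
  const a = toVec(lon1, lat1), b = toVec(lon2, lat2);
  const m = [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
  const len = Math.hypot(m[0], m[1], m[2]);
  return [Math.atan2(m[1] / len, m[0] / len) * deg,
          Math.asin(m[2] / len) * deg];
}
```

For two points at 60 degrees North on opposite sides of the globe, the flat midpoint sits at latitude 60, while the great-circle midpoint is near the pole - the two models genuinely disagree, and the spherical one costs more to compute.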

The extra computational costs d3 incurs in treating coordinates as points on a 3d globe are:

  1. Spherical Math

Take a look at a simple geographic projection before scaling, centering, etc:

// raw spherical Mercator: trig plus a logarithm for every point
function mercator(x, y) {
  return [x, Math.log(Math.tan(Math.PI / 4 + y / 2))];
}

This is likely to take longer per point than the reflectY transform you propose above.
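As a rough, unscientific sketch (plain JavaScript, no d3; the harness and point count are made up for illustration), you can compare the two per-point costs directly:

```javascript
// Compare the per-point cost of a raw spherical formula against the flat
// transform from the question. Illustrative harness, not a rigorous benchmark.

function mercator(x, y) {
  return [x, Math.log(Math.tan(Math.PI / 4 + y / 2))];
}

function reflectY(x, y) {
  return [x, -y]; // the entire per-point cost of the proposed transform
}

function time(fn, n) {
  const t0 = Date.now();
  let sink = 0; // accumulate so the loop isn't optimized away
  for (let i = 0; i < n; i++) {
    const [px, py] = fn(i % 3, (i % 90) / 100);
    sink += px + py;
  }
  return { ms: Date.now() - t0, sink };
}

const n = 1e6;
console.log("mercator:", time(mercator, n).ms, "ms");
console.log("reflectY:", time(reflectY, n).ms, "ms");
```

The absolute numbers vary by machine, but the trig-and-log path does strictly more work per point than a single negation.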

  2. Pathing

On a Cartesian plane, lines between two points are easy; on a sphere, they are difficult. Take a line stretching from 179 degrees East to 179 degrees West. Treating these as points on a Cartesian plane is easy: draw a line across the full width of the earth. On a spherical earth, the shorter line crosses the antimeridian.

Consequently, when flattening the paths, sampling is required along the route: a great-circle segment between two points bends when projected, and therefore needs additional interpolated points. I'm not certain of the exact process d3 uses, but the sampling certainly occurs.

Points on a Cartesian plane don't require additional sampling - they are already flat, and lines between them are straight. There is no need to detect whether lines wrap around the earth the other way.
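This is also why a transform stream over planar data is cheap. A minimal sketch of the stream interface that d3.geoTransform wraps (plain JavaScript, not d3's implementation; `makeReflectYStream` is an illustrative name) shows that each input point is forwarded exactly once, with no adaptive resampling:

```javascript
// Minimal sketch of a geoTransform-style stream: every point is passed
// through once with a single negation, and no extra sample points are
// generated, because planar input needs none.
function makeReflectYStream(sink) {
  return {
    point(x, y) { sink.point(x, -y); },      // one negation, then pass through
    lineStart() { sink.lineStart(); },
    lineEnd() { sink.lineEnd(); },
    polygonStart() { sink.polygonStart(); },
    polygonEnd() { sink.polygonEnd(); },
    sphere() {}                              // planar data: nothing to clip
  };
}

// A collecting sink, for illustration.
const collected = [];
const stream = makeReflectYStream({
  point: (x, y) => collected.push([x, y]),
  lineStart() {}, lineEnd() {}, polygonStart() {}, polygonEnd() {}
});

stream.point(100, 250);
```

One input point in, one output point out - compare that with a spherical projection stream, which may clip, rotate, and resample each segment.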

Operations post Projection

Once projected, something like .fitSize() will force additional work that is essentially what you are proposing with d3.geoTransform(): the features need to be translated and scaled based on their projected location and size.

This was very visible in d3v3 (before fitSize() existed) when auto-centering features: the calculations involved the svg extent of the projected features.
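The work involved is only a little arithmetic. A sketch of the scale-and-translate calculation that fitSize-style fitting performs on already-projected bounds (plain JavaScript; `fitTransform` is an illustrative name mirroring the idea, not d3's code):

```javascript
// Given the projected bounding box of the features, compute the scale and
// translate that fit and center them in a width x height viewport.
// A point p then maps to k * p + t.
function fitTransform([[x0, y0], [x1, y1]], width, height) {
  const k = Math.min(width / (x1 - x0), height / (y1 - y0)); // uniform scale
  const tx = (width - k * (x0 + x1)) / 2;                    // center on x
  const ty = (height - k * (y0 + y1)) / 2;                   // center on y
  return { scale: k, translate: [tx, ty] };
}
```

So the post-projection fitting step is one bounds pass plus a constant-cost linear transform per point - far less work than spherical projection and resampling.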


Basic Quasi Scientific Performance Comparison

Using a US Census Bureau shapefile of the United States, I created three geojson files:

  • One using WGS84 (long/lat) (file size: 389 kb)
  • One using geoproject in node with a plain d3.geoAlbers transform (file size: 386 kb)
  • One using geoproject in node with d3.geoAlbers().fitSize([500,500],d) (file size 385 kb)

The gold standard for speed should be option 3: the data is already scaled and centered for an anticipated display extent, so no transform is required, and I will use a null projection to test it.

I proceeded to project these to a 500x500 svg using:

// for the unprojected data
var projection = d3.geoAlbers()
    .fitSize([500, 500], wgs84);

var geoPath = d3.geoPath()
    .projection(projection);

// for the projected but unscaled and uncentered data
var transform = d3.geoIdentity()
    .fitSize([500, 500], albers);

var projectedPath = d3.geoPath()
    .projection(transform);

// for the projected, centered, and scaled data
var nullProjection = d3.geoPath();

Running this a few hundred times, I got average rendering times (data was preloaded) of:

  • 71 ms: WGS84
  • 33 ms: Projected but unscaled and uncentered
  • 21 ms: Projected, scaled, and centered

I feel safe in saying there is a significant performance bump in pre-projecting the data, regardless of whether it is actually centered and scaled.

Note that I used d3.geoIdentity() rather than d3.geoTransform() because it allows the use of fitSize(), and you can reflect on the y axis if needed: .reflectY(true).

