How to use the verde.get_region function in verde

To help you get started, we've selected a few verde examples based on popular ways verde.get_region is used in public projects.

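Every example on this page calls verde.get_region on a tuple of (longitude, latitude) arrays to find the bounding region of the data. As a minimal sketch of what the function returns (the coordinate values are made up for illustration):

import numpy as np
import verde as vd

longitude = np.array([-45.5, -44.0, -43.2])
latitude = np.array([-23.1, -22.5, -21.9])

# get_region returns the bounding box of the points as (west, east, south, north)
region = vd.get_region((longitude, latitude))
print(region)  # (-45.5, -43.2, -23.1, -21.9)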

github fatiando / verde / dev / _downloads / f808b7408e6a6dfc54a537fefabea4db / spline_weights.py

train, test = vd.train_test_split(
    projection(*coordinates),
    data.velocity_up,
    weights=1 / data.std_up ** 2,
    random_state=0,
)
# Fit the model on the training set
chain.fit(*train)
# And calculate an R² score on the testing set. The best possible score
# (perfect prediction) is 1. This can tell us how good our spline is at predicting
# data that was not in the input dataset.
score = chain.score(*test)
print("\nScore: {:.3f}".format(score))

# Create a grid of the vertical velocity and mask it to only show points close to the
# actual data.
region = vd.get_region(coordinates)
grid_full = chain.grid(
    region=region,
    spacing=spacing,
    projection=projection,
    dims=["latitude", "longitude"],
    data_names=["velocity"],
)
grid = vd.distance_mask(
    (data.longitude, data.latitude),
    maxdist=5 * spacing * 111e3,
    grid=grid_full,
    projection=projection,
)

fig, axes = plt.subplots(
    1, 2, figsize=(9, 7), subplot_kw=dict(projection=ccrs.Mercator())
)

github fatiando / verde / dev / _downloads / 2f02b58842dd1effc10e8f9e9694358b / scipygridder.py

reducer = vd.BlockReduce(reduction=np.median, spacing=spacing)
coordinates, bathymetry = reducer.filter(
    (data.longitude, data.latitude), data.bathymetry_m
)

# Project the data using pyproj so that we can use it as input for the gridder.
# We'll set the latitude of true scale to the mean latitude of the data.
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coordinates = projection(*coordinates)

# Now we can set up a gridder for the decimated data
grd = vd.ScipyGridder(method="cubic").fit(proj_coordinates, bathymetry)
print("Gridder used:", grd)

# Get the grid region in geographic coordinates
region = vd.get_region((data.longitude, data.latitude))
print("Data region:", region)

# The 'grid' method can still make a geographic grid if we pass in a projection function
# that converts lon, lat into the easting, northing coordinates that we used in 'fit'.
# This can be any function that takes lon, lat and returns x, y. In our case, it'll be
# the 'projection' variable that we created above. We'll also set the names of the grid
# dimensions and the name of the data variable in our grid (the default would be 'scalars',
# which isn't very informative).
grid = grd.grid(
    region=region,
    spacing=spacing,
    projection=projection,
    dims=["latitude", "longitude"],
    data_names=["bathymetry_m"],
)
print("Generated geographic grid:")

github fatiando / verde / dev / _downloads / 8e59c4ed612021f145f74b62d5b36182 / weights.py

# Interpolation with weights
# --------------------------
#
# The Green's functions based interpolation classes in Verde, like
# :class:`~verde.Spline`, can take input weights if you want to give less importance to
# some data points. In our case, the points with larger uncertainties shouldn't have the
# same influence in our gridded solution as the points with lower uncertainties.
#
# Let's set up a projection to grid our geographic data using the Cartesian spline
# gridder.
import pyproj

projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coords = projection(data.longitude.values, data.latitude.values)

region = vd.get_region(coordinates)
spacing = 5 / 60

########################################################################################
# Now we can grid our data using a weighted spline. We'll use the block mean results
# with uncertainty based weights.
#
# Note that the weighted spline solution will only work on a non-exact interpolation. So
# we'll need to use some damping regularization or not use the data locations for the
# point forces. Here, we'll apply a bit of damping.
spline = vd.Chain(
    [
        # Convert the spacing to meters because Spline is a Cartesian gridder
        ("mean", vd.BlockMean(spacing=spacing * 111e3, uncertainty=True)),
        ("spline", vd.Spline(damping=1e-10)),
    ]
).fit(proj_coords, data.velocity_up, data.weights)
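########################################################################################
# The snippet above stops after fitting. A plausible continuation, mirroring the grid
# call in the spline_weights.py snippet at the top of this page (the data_names value
# is an assumption):
grid = spline.grid(
    region=region,
    spacing=spacing,
    projection=projection,
    dims=["latitude", "longitude"],
    data_names=["velocity"],
)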

github fatiando / verde / tutorials / model_selection.py

Once again, we'll start by importing the required packages and loading our sample data.
"""
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import itertools
import pyproj
import verde as vd

data = vd.datasets.fetch_texas_wind()

# Use Mercator projection because Spline is a Cartesian gridder
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coords = projection(data.longitude.values, data.latitude.values)

region = vd.get_region((data.longitude, data.latitude))
# The desired grid spacing in degrees (converted to meters using 1 degree approx. 111km)
spacing = 15 / 60

########################################################################################
# Before we begin tuning, let's reiterate what the results were with the default
# parameters.

spline_default = vd.Spline()
score_default = np.mean(
    vd.cross_val_score(spline_default, proj_coords, data.air_temperature_c)
)
spline_default.fit(proj_coords, data.air_temperature_c)
print("R² with defaults:", score_default)


########################################################################################
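# The tutorial goes on to tune these parameters, which is why itertools was imported
# above. A minimal sketch of building the candidate combinations (the damping and
# mindist values here are assumed for illustration):
dampings = [None, 1e-4, 1e-3, 1e-2]
mindists = [5e3, 10e3, 50e3, 100e3]
parameter_sets = [
    dict(damping=damping, mindist=mindist)
    for damping, mindist in itertools.product(dampings, mindists)
]
print("Number of combinations:", len(parameter_sets))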

github fatiando / verde / examples / distance_mask.py

Mask grid points by distance
============================

Sometimes, data points are unevenly distributed. In such cases, we might not want to
have interpolated grid points that are too far from any data point. Function
:func:`verde.distance_mask` allows us to set such points to NaN or some other value.
"""
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import pyproj
import numpy as np
import verde as vd

# The Baja California bathymetry dataset has big gaps on land. We want to mask these
# gaps on a dummy grid that we'll generate over the region.
data = vd.datasets.fetch_baja_bathymetry()
region = vd.get_region((data.longitude, data.latitude))

# Generate the coordinates for a regular grid mask
spacing = 10 / 60
coordinates = vd.grid_coordinates(region, spacing=spacing)

# Generate a mask for points that are more than 2 grid spacings away from any data
# point. The mask is True for points that are within the maximum distance. Distance
# calculations in the mask are Cartesian only. We can provide a projection function to
# convert the coordinates before distances are calculated (Mercator in this case). In
# this case, the maximum distance is also Cartesian and must be converted from degrees
# to meters.
mask = vd.distance_mask(
    (data.longitude, data.latitude),
    maxdist=spacing * 2 * 111e3,
    coordinates=coordinates,
    projection=pyproj.Proj(proj="merc", lat_ts=data.latitude.mean()),
)
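# With the mask in hand, we can blank out far-away points on a grid, as described
# above. A minimal sketch (the dummy grid of ones is purely illustrative):
dummy_data = np.ones_like(coordinates[0])
dummy_data[~mask] = np.nan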

github fatiando / verde / dev / _downloads / 10c906fdfef20a81f9db581a0c5381e3 / grid_coordinates.py

def plot_grid(ax, coordinates, linestyles="dotted", region=None, pad=50, **kwargs):
    "Plot the grid coordinates as dots and lines."
    data_region = vd.get_region(coordinates)
    ax.vlines(
        coordinates[0][0],
        ymin=data_region[2],
        ymax=data_region[3],
        linestyles=linestyles,
        zorder=0,
    )
    ax.hlines(
        coordinates[1][:, 1],
        xmin=data_region[0],
        xmax=data_region[1],
        linestyles=linestyles,
        zorder=0,
    )
    ax.scatter(*coordinates, **kwargs)
    if pad:

github fatiando / harmonica / data / examples / britain_magnetic.py

# Reconstructed opening of the truncated call; the anomaly column name is assumed
tmp = ax.scatter(
    data.longitude,
    data.latitude,
    c=data.total_field_anomaly_nt,
    s=0.001,
    cmap="seismic",
    vmin=-maxabs,
    vmax=maxabs,
    transform=ccrs.PlateCarree(),
)
plt.colorbar(
    tmp,
    ax=ax,
    label="total field magnetic anomaly [nT]",
    orientation="vertical",
    aspect=50,
    shrink=0.7,
    pad=0.1,
)
ax.set_extent(vd.get_region((data.longitude, data.latitude)))
ax.gridlines(draw_labels=True)
ax.coastlines(resolution="50m")
plt.tight_layout()
plt.show()

github fatiando / verde / examples / chain.py

the coordinates as well but don't impact the predictions directly.

For example, say we want to decimate some data, calculate a polynomial trend, and fit a
gridder on the residual values (not the trend), but then make a grid of the full data
values (trend included). This is useful because decimation is common and many gridders
can't handle trends in data.
"""
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import pyproj
import verde as vd

# Load the Rio de Janeiro total field magnetic anomaly data
data = vd.datasets.fetch_rio_magnetic()
region = vd.get_region((data.longitude, data.latitude))

# Create a projection for the data using pyproj so that we can use it as input for the
# gridder. We'll set the latitude of true scale to the mean latitude of the data.
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())

# Create a chain that fits a 2nd degree trend, decimates the residuals using a blocked
# mean to avoid aliasing, and then fits a standard gridder to the residuals. The spacing
# for the blocked mean will be 0.5 arc-minutes (approximately converted to meters).
spacing = 0.5 / 60
chain = vd.Chain(
    [
        ("trend", vd.Trend(degree=2)),
        ("reduce", vd.BlockReduce(np.mean, spacing * 111e3)),
        ("spline", vd.Spline(damping=1e-8, mindist=50e3)),
    ]
)
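# The snippet ends before the chain is used. A plausible next step, fitting on the
# projected coordinates (the total_field_anomaly_nt column name is an assumption
# about this dataset):
chain.fit(
    projection(data.longitude.values, data.latitude.values),
    data.total_field_anomaly_nt,
)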

github fatiando / verde / examples / vector_trend.py

    cb.set_label("meters/year")
    # Plot the original data
    ax.quiver(
        data.longitude.values,
        data.latitude.values,
        data.velocity_east.values,
        data.velocity_north.values,
        scale=0.2,
        transform=crs,
        color="k",
        width=0.001,
        label="Original data",
    )
    # Setup the map ticks
    vd.datasets.setup_california_gps_map(
        ax, land=None, ocean=None, region=vd.get_region((data.longitude, data.latitude))
    )
    ax.coastlines(color="white")
axes[0].legend(loc="lower left")
plt.tight_layout()
plt.show()

github fatiando / verde / dev / _downloads / 9c35741d00d56cadd9c9ce8cb60a83ac / model_evaluation.py

data. Let's use these tools to evaluate the performance of a :class:`~verde.Spline` on
our sample air temperature data.
"""
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import pyproj
import verde as vd

data = vd.datasets.fetch_texas_wind()

# Use Mercator projection because Spline is a Cartesian gridder
projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
proj_coords = projection(data.longitude.values, data.latitude.values)

region = vd.get_region((data.longitude, data.latitude))
# For this data, we'll generate a grid with 15 arc-minute spacing
spacing = 15 / 60

########################################################################################
# Splitting the data
# ------------------
#
# We can't evaluate a gridder on the data that went into fitting it. The true test of a
# model is if it can correctly predict data that it hasn't seen before. scikit-learn has
# the :func:`sklearn.model_selection.train_test_split` function to separate a dataset
# into two parts: one for fitting the model (called *training* data) and a separate one
# for evaluating the model (called *testing* data). Using it with spatial data would
# involve some tedious array conversions, so Verde implements
# :func:`verde.train_test_split` which does the same thing but takes coordinates and
# data arrays instead.
#
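# The snippet cuts off here, but the split itself is a one-liner. A minimal sketch
# (random_state=0 is an illustrative choice that makes the split reproducible):
train, test = vd.train_test_split(
    proj_coords, data.air_temperature_c, random_state=0
)
# train and test are (coordinates, data) tuples that can be unpacked into fit and
# score, e.g. vd.Spline().fit(*train).score(*test)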