I used the default environment to install MetPy with:
conda install metpy
but got errors:
TypeError: find_intersections takes 5 parameters, but 3 units were passed
Following this post, the issue seems to be a version dependency problem: the conda channel provides MetPy 0.11, while version 0.12 fixes the error.
Here we just create a new conda environment for MetPy:
conda create -n metpy python=3.7
Then use pip install to install MetPy. After that, we found that cartopy also needs to be installed. Then everything works fine.
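Putting the steps above together, the full sequence of commands might look like this (a sketch assuming conda and pip are on the PATH; the environment name `metpy` is just the one used above):

```shell
# create and activate a fresh environment for MetPy
conda create -n metpy python=3.7
conda activate metpy

# install MetPy (0.12 or newer) and cartopy from PyPI
pip install metpy cartopy
```

Installing from PyPI avoids the outdated 0.11 build in the conda channel.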
Example figure:
CartoPy figure:
Updated 2020-05-11
Here we show our first “hello world” program with TensorFlow on a CHPC GPU node. Environment:
import tensorflow as tf
import numpy as np
# use mnist data
mnist = tf.keras.datasets.mnist
print('mnist.load_data')
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# normalize data
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
# sequential network
model = tf.keras.models.Sequential()
# input layer
model.add(tf.keras.layers.Flatten())
# hidden layers
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
# output layer
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
print('model.fit')
model.fit(x_train, y_train, epochs=3)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
model.save('epic_num_reader.model')
new_model = tf.keras.models.load_model('epic_num_reader.model')
predictions = new_model.predict(np.array(x_test))
print(np.argmax(predictions[0]))
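Note that `tf.keras.utils.normalize` performs an L2 normalization along the given axis, which is different from the more common `/255.0` pixel scaling for MNIST. A minimal NumPy sketch of the two preprocessing choices (toy data, not the real MNIST arrays):

```python
import numpy as np

# a toy "image" row
x = np.array([[0.0, 3.0, 4.0]])

# what tf.keras.utils.normalize(x, axis=1) does: divide each row by its L2 norm
l2_normalized = x / np.linalg.norm(x, axis=1, keepdims=True)
print(l2_normalized)  # -> [[0.  0.6 0.8]]

# the more common MNIST preprocessing: scale raw pixel values into [0, 1]
scaled = x / 255.0
```

Both put the inputs on a comparable scale for training; they just do it per-row versus per-pixel.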
The execution output (hardware info omitted):
Epoch 1/3
2020-05-08 22:14:25.772225: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
60000/60000 [==============================] - 6s 93us/sample - loss: 0.2649 - acc: 0.9225
Epoch 2/3
60000/60000 [==============================] - 5s 85us/sample - loss: 0.1056 - acc: 0.9682
Epoch 3/3
60000/60000 [==============================] - 5s 86us/sample - loss: 0.0721 - acc: 0.9769
10000/10000 [==============================] - 1s 59us/sample - loss: 0.0908 - acc: 0.9721
0.09084201904330402 0.9721
7
The predicted handwritten digit is “7”.
Actually, we found that in this case the GPU version of TensorFlow is slower than the CPU version. This may be due to a scalability issue: the model is too small for the GPU's overhead to pay off.
Updated 2020-05-08
Starting to use Python to deal with WRF output. xarray is a very important package for working with NetCDF and HDF5 data.
WRF files are not exactly CF compliant: you'll need a special parser for the timestamp, the coordinate names are a bit exotic and do not correspond to the dimension names, and they contain so-called staggered variables (and their corresponding coordinates), etc.
salem is needed to parse WRF data. This makes it easy to slice WRF data in xarray:
ds=ds.sel(time=slice('2018-09-15','2018-09-17'))
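As a minimal sketch of the time slicing above, using a toy xarray dataset with a synthetic 6-hourly time axis (the variable name `t2` is just an illustration, not a real parsed WRF field):

```python
import numpy as np
import pandas as pd
import xarray as xr

# build a toy dataset standing in for parsed WRF output
times = pd.date_range('2018-09-14', '2018-09-18', freq='6h')
ds = xr.Dataset({'t2': ('time', np.arange(len(times), dtype=float))},
                coords={'time': times})

# label-based slicing on the time coordinate, as in the post;
# both endpoint dates are included (whole days)
sub = ds.sel(time=slice('2018-09-15', '2018-09-17'))
print(sub.time.size)  # -> 12 (three full days at 6-hourly steps)
```

The same `sel(time=slice(...))` call works on a real dataset opened with salem, since salem returns xarray objects with a proper time coordinate.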
Updated 2020-05-03