Monday 12 December 2016

Precipitation past and future for Ireland.

A quick overview from Met Éireann's rainfall climate page:

Most of the eastern half of the country gets between 750 and 1000 mm of rainfall in the year. Rainfall in the west generally averages between 1000 and 1400 mm. In many mountainous districts rainfall exceeds 2000 mm per year.

Rainfall shows great year-to-year variability. A 30-year running mean of the national annual rainfall indicated an increase in average national rainfall of approximately 70 mm over the last two decades.

The average number of wet days (days with 1 mm or more of rain) ranges from about 150 days a year along the east and south-east coasts to about 225 days a year in parts of the west.


[Figure: Annual rainfall map]

[Figure: Average Annual Rainfall]

What do I need to look at to define a precipitation climate?

Precipitation is quite a noisy variable to deal with, and there are lots of ways to look at it. Climate Change Indices have been defined by the Expert Team on Climate Change Detection, Monitoring and Indices (ETCCDMI):
http://etccdi.pacificclimate.org/list_27_indices.shtml

Ones that I've used here are:
  • PRCPTOT: Total precipitation in wet (>1mm) days.
  • SDII: Simple Daily Intensity Index. Mean precipitation amount for wet days (mm/wet day)
  • R10mm: Heavy Precipitation Days. No. days >10mm
  • CDD: Max no. Consecutive Dry (<1mm) Days
  • CWD: Max no. Consecutive Wet (>=1mm) Days
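
As a quick sketch of what computing these involves, something like the following awk script works on a column of daily totals, one value (mm) per line, for a single year (the file name rr_daily.txt is just a placeholder; thresholds follow the ETCCDI definitions: wet day >= 1 mm, heavy day >= 10 mm):

awk '
{
    rr = $1
    if (rr >= 1.0) {                  # wet day (>= 1 mm)
        prcptot += rr; nwet++
        wetrun++; if (wetrun > cwd) cwd = wetrun
        dryrun = 0
    } else {                          # dry day (< 1 mm)
        dryrun++; if (dryrun > cdd) cdd = dryrun
        wetrun = 0
    }
    if (rr >= 10.0) r10mm++           # heavy precipitation day
}
END {
    printf "PRCPTOT = %.1f mm\n", prcptot
    printf "SDII    = %.2f mm/wet day\n", (nwet ? prcptot/nwet : 0)
    printf "R10mm   = %d days\n", r10mm
    printf "CDD     = %d days\n", cdd
    printf "CWD     = %d days\n", cwd
}' rr_daily.txt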

Friday 8 July 2016

WRF, ORR2, Infiniband, Intel

Orr2 is the HPC cluster of the UCD School of Mathematics and Statistics.

I'd like to install WRF as an aid in teaching the MSc Climate Change.

ORR2 architecture

Orr2 consists of 20 compute nodes on the following queues ('qstat -f' will show all queues):
  • 2x64.q
    compute-0-0.local, compute-0-1.local
    2 nodes with 64 cores: AMD Bulldozer/Opteron. 2.1GHz, 128GB.
  • 68nht.q
    compute-1-0.local
    1 node with 12 cores: Intel Nehalem. 2.66GHz, 24GB, Infiniband
    compute-1-1.local - compute-1-7.local
    7 nodes with 8 cores: Intel Nehalem. 2.4GHz, 24GB, Infiniband 
  • 6x8i.q
    compute-2-0.local - compute-2-5.local
    6 nodes with 8 cores: Intel Core2. 2.5GHz, 32GB.
  • 4x8a.q
    compute-3-0.local - compute-3-3.local
    4 nodes with 8 cores: AMD Shanghai. 2.6GHz, 32GB.
Orr2 has the Intel compilers and MVAPICH (MPI over InfiniBand) installed. The Orr2 documentation has information on using the Intel compilers with MVAPICH.
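
For the build itself, the environment setup should look roughly like this (the module names and library paths are guesses to be checked against 'module avail' and the Orr2 docs; the configure/compile steps are the standard WRF ones):

module purge
module load intel mvapich2                 # assumed module names - check 'module avail'

# WRF's configure script picks up libraries from environment variables
# (the paths are placeholders for wherever these live on Orr2)
export NETCDF=/path/to/netcdf
export JASPERLIB=/path/to/jasper/lib       # GRIB2 support for WPS/ungrib
export JASPERINC=/path/to/jasper/include

cd WRFV3
./configure                                # choose the Linux x86_64 ifort/icc (dmpar) option
./compile em_real >& compile.log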

Download WRF, geog and libraries

Wednesday 29 June 2016

GFS data for MCC WRF forecasts

The forecasts on the MCC web page (http://mathsci.ucd.ie/met/mcc-forecast.html) use global forecast data from NOAA's Global Forecast System (GFS: http://www.emc.ncep.noaa.gov/index.php?branch=GFS). GFS data are available on a global 0.25 degree grid, every hour.

GFS forecast data should be available for download 4 hours after forecast analysis time:

(ftp://ftp.ncep.noaa.gov/pub/data/nccf/com/gfs/prod)

I'd like to download a subset of the GFS data to run the MCC WRF forecast at a higher resolution for Ireland. To do this, I am going to use the NOMADS g2subs service:

http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?dir=%2Fgfs.2016062906
  • Use "make subregion": lon:-55 to 25, lat: 25 to 70
  • Select the option: "Show the URL only for web programming"
  • Open a terminal and set the data URL, then use curl to retrieve the data file:
bash$ URL="http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl..."
bash$ curl "$URL" -o my_grib_file1

This works!
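
To pull a whole run, the same filter URL can be looped over forecast hours. The query parameters below should match what the "Show the URL only for web programming" option produces for the subregion above; if in doubt, take the exact URL from the page and just substitute the forecast hour:

BASE="http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl"
RUN="2016062906"                                  # GFS cycle, YYYYMMDDHH
SUB="subregion=&leftlon=-55&rightlon=25&toplat=70&bottomlat=25"
for FF in 000 003 006 009 012; do                 # forecast hours wanted
    FILE="gfs.t06z.pgrb2.0p25.f${FF}"             # the tHHz must match the cycle hour in RUN
    URL="${BASE}?file=${FILE}&all_lev=on&all_var=on&${SUB}&dir=%2Fgfs.${RUN}"
    curl "$URL" -o ${FILE}
done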

Notes:

  • Don't use the analysis data file - it doesn't contain soil data.
  • You can see which fields are in the GFS file by using the WPS g2print.exe command (see the example after these notes).
  • Using all levels, all variables, and the subregion (lon: -55 to 25, lat: 25 to 70) reduced the size of the analysis file from 176M to 24M. The file size becomes even smaller by requesting fewer variables.
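
For the g2print check mentioned above, it just takes the GRIB file as its argument (the path assumes a standard WPS build; adjust to wherever WPS is installed):

bash$ ./WPS/util/g2print.exe my_grib_file1 | less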

Friday 3 June 2016

WRF on SONIC

These are my notes on WRF timing when running on multiple nodes on SONIC at UCD.

The domain used for the speedup tests was a 300 x 250 grid (namelist.wps):
[Figure: WPS domain plot (Jupyter notebook: PlotWPSDomain.ipynb)]

SONIC has a mixture of 24-core and 40-core (hyper-threaded) nodes. Only the 40-core nodes are guaranteed to have InfiniBand. There are nodes with more cores (e.g. the highmem node), but these mess up the MPI messaging. So, to ensure the best and most consistent performance, use the infiniband queue:

qsub -q infiniband ...

#PBS -l nodes=04:ppn=40
...
module purge
module load WRF

module list
...

time mpirun -np 64 --map-by ppr:1:core wrf.exe

The map-by-core option is required to ensure that hyper-threading is not used.
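
The --map-by syntax suggests Open MPI's mpirun, so (assuming that is the MPI the WRF module loads) the placement can be double-checked with --report-bindings against a trivial executable:

bash$ mpirun -np 64 --map-by ppr:1:core --report-bindings hostname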

I ran WRF using different numbers of nodes and cores. The I/O time becomes significant for larger numbers of cores, as the other ranks have to wait during I/O. To get a fairer picture of the speedup with more cores, I've done the following:

  • changed the history interval in namelist.input to be greater than the forecast range, so that wrfout files are not written (apart from at the analysis time)
  • stripped only the compute timings out of the rsl.out.0000 log file, using the following command, which skips the timings at, and one minute after, each half hour, as those timings are larger.

  • NC=04x64
    grep 'main:.*[2-9]:00' RUNDIR${NC}/rsl.out.0000 | awk '{print $9}' > ComputeTimings.${NC}
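
To collapse each of those timing files into a single compute-only total (the ratio of totals between runs then gives the speedup), something like this does the job:

for f in ComputeTimings.*; do
    awk -v run="${f#ComputeTimings.}" \
        '{s += $1} END {printf "%s: %.1f s compute over %d steps\n", run, s, NR}' "$f"
done
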
Here are the timings I got:




Friday 13 May 2016

CMIP5 Data

Downloading CMIP5 data.

Go here: https://pcmdi.llnl.gov/projects/cmip5/


Click "Login" link at the top-right and use OpenID and password.

Use Search to select the model, experiment and variables, then download the wget script for the files.

Edit the wget script to remove unwanted downloads.
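
Running the edited script is then straightforward (the script name below is made up, and the options differ between versions of the ESGF wget script, so check -h first; the download list sits in a clearly marked block inside the script, and removing unwanted files is just a matter of deleting lines from that block):

bash$ bash wget-cmip5-HadGEM2-ES-psl.sh -h     # list the options this version supports
bash$ bash wget-cmip5-HadGEM2-ES-psl.sh        # run; it should prompt for ESGF OpenID credentials when needed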

I've retrieved HadGEM2-ES mean sea-level pressure (psl) data every six hours for the historical (1950-2005) and RCP85 (2006-2100) ensemble member r1i1p1. I'll use this to look at some storm track behaviour.