I have tried this for smaller arrays, so test it with yours:

```python
import numpy as np

Nbig = 100
Nsmall = 20
big = np.arange(Nbig * Nbig).reshape(Nbig, Nbig)  # 100x100
small = big.reshape(Nsmall, Nbig // Nsmall, Nsmall, Nbig // Nsmall).mean(3).mean(1)
```

An example with 6x6 -> 3x3:

```python
Nbig = 6
Nsmall = 3
big = np.arange(36).reshape(6, 6)
# array([[ 0,  1,  2,  3,  4,  5],
#        [ 6,  7,  8,  9, 10, 11],
#        [12, 13, 14, 15, 16, 17],
#        [18, 19, 20, 21, 22, 23],
#        [24, 25, 26, 27, 28, 29],
#        [30, 31, 32, 33, 34, 35]])
small = big.reshape(Nsmall, Nbig // Nsmall, Nsmall, Nbig // Nsmall).mean(3).mean(1)
# array([[ 3.5,  5.5,  7.5],
#        [15.5, 17.5, 19.5],
#        [27.5, 29.5, 31.5]])
```

Note the integer division `//` in the reshape: on Python 3, `Nbig / Nsmall` would produce a float and `reshape` would raise an error.
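The same reshape trick can be wrapped in a small helper. This is just a sketch; the function name `downsample_mean` is mine, and it assumes a square array whose side length is an exact multiple of the target size:

```python
import numpy as np

def downsample_mean(a, Nsmall):
    """Block-average a square 2D array down to (Nsmall, Nsmall).

    Assumes a.shape[0] == a.shape[1] and Nsmall divides the side length.
    """
    Nbig = a.shape[0]
    f = Nbig // Nsmall  # block size along each axis
    # averaging over axes 1 and 3 collapses each f x f block to its mean
    return a.reshape(Nsmall, f, Nsmall, f).mean(axis=(1, 3))

big = np.arange(36).reshape(6, 6)
small = downsample_mean(big, 3)
# array([[ 3.5,  5.5,  7.5],
#        [15.5, 17.5, 19.5],
#        [27.5, 29.5, 31.5]])
```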
This is pretty straightforward, although I feel like it could be faster:

```python
from __future__ import division
import numpy as np

Norig = 100
Ndown = 20
step = Norig // Ndown
assert step == Norig / Ndown  # ensure Ndown is an integer factor of Norig
x = np.arange(Norig * Norig).reshape((Norig, Norig))  # for testing
y = np.empty((Ndown, Ndown))  # for testing

for yr, xr in enumerate(np.arange(0, Norig, step)):
    for yc, xc in enumerate(np.arange(0, Norig, step)):
        y[yr, yc] = np.mean(x[xr:xr + step, xc:xc + step])
```

You might also find scipy.signal.decimate interesting. It applies a more sophisticated low-pass filter than simple averaging before downsampling the data, although you'd have to decimate one axis, then the other.
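A minimal sketch of the axis-by-axis decimation mentioned above, assuming SciPy is available and a downsampling factor of 5 per axis (so 100x100 -> 20x20); `decimate` uses an IIR anti-aliasing filter by default:

```python
import numpy as np
from scipy.signal import decimate

big = np.arange(100 * 100, dtype=float).reshape(100, 100)
# decimate rows first, then columns, by a factor of 5 each
small = decimate(decimate(big, 5, axis=0), 5, axis=1)
# small.shape == (20, 20)
```

Unlike plain block averaging, the filtered result will differ slightly near the edges because of the filter's transient response.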
Average a 2D array over subarrays of size NxN:

```python
from numpy import average, split

height, width = data.shape
data = average(split(average(split(data, width // N, axis=1), axis=-1),
                     height // N, axis=1), axis=-1)
```
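To see that this split/average one-liner really computes block means, here is a small self-contained check (the 6x6 array, `N = 2`, and the reshape-based comparison are my own test setup, not part of the original answer):

```python
import numpy as np

data = np.arange(36, dtype=float).reshape(6, 6)
N = 2
height, width = data.shape

# split/average approach: collapse columns within each block, then rows
out = np.average(
    np.split(np.average(np.split(data, width // N, axis=1), axis=-1),
             height // N, axis=1),
    axis=-1)

# reference: the reshape-based block mean
ref = data.reshape(height // N, N, width // N, N).mean(3).mean(1)
assert np.allclose(out, ref)
```

Both give a 3x3 array where each entry is the mean of the corresponding 2x2 block.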