If I put your sample in a file, I can load it into a structured numpy array with:
In [45]: names=['Time','Node','Type','Metric_1','Metric_2']
In [46]: data = np.genfromtxt('stack38285208.txt', dtype=None, names=names, skip_header=1)
In [47]: data
Out[47]:
array([(0.0, 1, b'Abcd', 1234.5678, 9012.3456),
(0.0, 1, b'Efgh', 1234.5678, 9012.3456),
(0.01, 2, b'Abcd', 1234.5678, 9012.3456),
(0.01, 2, b'Efgh', 1234.5678, 9012.3456),
(0.02, 3, b'Abcd', 1234.5678, 9012.3456),
(0.02, 3, b'Efgh', 1234.5678, 9012.3456),
(0.03, 1, b'Abcd', 1234.5678, 9012.3456),
(0.03, 1, b'Efgh', 1234.5678, 9012.3456),
(0.04, 2, b'Abcd', 1234.5678, 9012.3456),
(0.04, 2, b'Efgh', 1234.5678, 9012.3456)],
dtype=[('Time', '<f8'), ('Node', '<i4'), ('Type', 'S4'), ('Metric_1', '<f8'), ('Metric_2', '<f8')])
I could not use names=True because you have headers like Metric 1, which contain a space and would be interpreted as two column names. Hence the explicit names list and skip_header=1. I'm using Python 3, so strings with the S4 dtype display as bytes, e.g. b'Efgh'.
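For reference, here is a self-contained version of that load step; the sample text below is reconstructed from the rows shown above, and reading from a StringIO stands in for the file:

```python
import io
import numpy as np

# Reconstructed sample; the header uses "Metric 1" / "Metric 2",
# which names=True would split into two names each.
text = """Time Node Type Metric 1 Metric 2
0.00 1 Abcd 1234.5678 9012.3456
0.00 1 Efgh 1234.5678 9012.3456
0.01 2 Abcd 1234.5678 9012.3456
0.01 2 Efgh 1234.5678 9012.3456
"""

names = ['Time', 'Node', 'Type', 'Metric_1', 'Metric_2']
# dtype=None lets genfromtxt infer a per-column dtype;
# skip_header=1 discards the problematic header line.
data = np.genfromtxt(io.StringIO(text), dtype=None, names=names,
                     skip_header=1)
print(data['Node'])   # -> [1 1 2 2]
```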
I can access fields (columns) by field name, and do various sorts of filter and math with those. For example:
Rows where Type is b'Abcd':
In [63]: data['Type']==b'Abcd'
Out[63]: array([ True, False, True, False, True, False, True, False, True, False], dtype=bool)
and where Node is 1:
In [64]: data['Node']==1
Out[64]: array([ True, True, False, False, False, False, True, True, False, False], dtype=bool)
and together:
In [65]: (data['Node']==1)&(data['Type']==b'Abcd')
Out[65]: array([ True, False, False, False, False, False, True, False, False, False], dtype=bool)
In [66]: ind=(data['Node']==1)&(data['Type']==b'Abcd')
In [67]: data[ind]
Out[67]:
array([(0.0, 1, b'Abcd', 1234.5678, 9012.3456),
(0.03, 1, b'Abcd', 1234.5678, 9012.3456)],
dtype=[('Time', '<f8'), ('Node', '<i4'), ('Type', 'S4'), ('Metric_1', '<f8'), ('Metric_2', '<f8')])
I can take the mean of any of the numeric fields from this subset of records:
In [68]: data[ind]['Metric_1'].mean()
Out[68]: 1234.5678
In [69]: data[ind]['Metric_2'].mean()
Out[69]: 9012.3456000000006
I could also assign these fields to variables and work with those directly:
In [70]: nodes=data['Node']
In [71]: types=data['Type']
In [72]: nodes
Out[72]: array([1, 1, 2, 2, 3, 3, 1, 1, 2, 2])
In [73]: types
Out[73]:
array([b'Abcd', b'Efgh', b'Abcd', b'Efgh', b'Abcd', b'Efgh', b'Abcd',
b'Efgh', b'Abcd', b'Efgh'],
dtype='|S4')
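One thing worth knowing when working through variables like this: single-field access on a structured array returns a view, not a copy, so writes through the variable show up in the original array. A small sketch (the array literal is taken from the dtype above):

```python
import numpy as np

data = np.array([(0.0, 1, b'Abcd', 1234.5678, 9012.3456),
                 (0.03, 1, b'Efgh', 1234.5678, 9012.3456)],
                dtype=[('Time', '<f8'), ('Node', '<i4'), ('Type', 'S4'),
                       ('Metric_1', '<f8'), ('Metric_2', '<f8')])

# Field access returns a view: assigning through `nodes`
# modifies `data` as well.
nodes = data['Node']
nodes[0] = 99
print(data['Node'])   # -> [99  1]
```

Use `data['Node'].copy()` instead if you want an independent array.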
The two float fields, viewed as a 2-column array:
In [78]: metrics = data[['Metric_1','Metric_2']].view(('float',(2)))
In [79]: metrics
Out[79]:
array([[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456]])
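A caveat: multi-field indexing semantics changed in newer NumPy (1.16+), so the `.view(('float', 2))` trick may raise an error there. The supported conversion is `structured_to_unstructured` from `numpy.lib.recfunctions`:

```python
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

data = np.array([(0.0, 1, b'Abcd', 1234.5678, 9012.3456),
                 (0.0, 1, b'Efgh', 1234.5678, 9012.3456)],
                dtype=[('Time', '<f8'), ('Node', '<i4'), ('Type', 'S4'),
                       ('Metric_1', '<f8'), ('Metric_2', '<f8')])

# Convert the two float fields to a plain (n, 2) float array.
metrics = structured_to_unstructured(data[['Metric_1', 'Metric_2']])
print(metrics.shape)   # -> (2, 2)
```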
Metrics for the rows where nodes is 1:
In [83]: metrics[nodes==1,:]
Out[83]:
array([[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456],
[ 1234.5678, 9012.3456]])
In [84]: metrics[nodes==1,:].mean(axis=0) # column mean
Out[84]: array([ 1234.5678, 9012.3456])
numpy doesn't have a neat groupby function, though pandas does (DataFrame.groupby), and itertools.groupby can serve the same purpose on sorted data.
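That said, np.unique with return_inverse gives a serviceable groupby substitute in plain numpy. A minimal sketch using the node and metric values from above (the `means` dict is my own naming):

```python
import numpy as np

nodes = np.array([1, 1, 2, 2, 3, 3, 1, 1, 2, 2])
metrics = np.tile([[1234.5678, 9012.3456]], (10, 1))

# return_inverse maps each row to the index of its group,
# so a boolean mask per group selects that node's rows.
uniq, inv = np.unique(nodes, return_inverse=True)
means = {n: metrics[inv == i].mean(axis=0) for i, n in enumerate(uniq)}
print(sorted(means))   # -> [1, 2, 3]
```

For large numbers of groups this loop over np.unique is slower than pandas' groupby, but it avoids the extra dependency.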