
__getitem__ and scalar for neo objects #490

@samuelgarcia

Description


Here is an issue split out from #454, raised by @bjoern1001001.

I have a question about this, because I have been working on SpikeTrain.__getitem__ for the last few days. It's rather general, but I noticed it here.

I noticed that what you wrote works in Epoch for things like epoch[1], while the same approach would raise an error in SpikeTrain: when you create the new Epoch in __getitem__, times is a scalar quantity, NOT an array or Epoch. Creating a SpikeTrain the same way in __getitem__, via SpikeTrain(times=super(...).__getitem__(i), ...), would raise an error complaining about taking the length of an unsized object. Is it intended that Epochs can be created with a scalar quantity instead of a quantity array? And if so, shouldn't the scalar be converted to an array in order to ensure consistency?

A little example of what happens:

epoch = neo.Epoch([1,2,3]*pq.s)
ep1 = epoch[1] # ep1 is now an Epoch with times=2*pq.s, not [2]*pq.s or array([2]*pq.s)
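The underlying asymmetry can be reproduced with plain numpy, without neo or quantities at all (this is just an illustration of numpy's indexing rules, not neo code):

```python
import numpy as np

a = np.array([1, 2, 3])

elem = a[1]        # integer index -> numpy scalar, not an array
print(type(elem))  # e.g. <class 'numpy.int64'> (platform-dependent)
print(elem.shape)  # () -- zero-dimensional

sub = a[1:2]       # slice index -> array
print(type(sub))   # <class 'numpy.ndarray'>
print(sub.shape)   # (1,)
```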

This is because the following happens in __getitem__:
times = super(...).__getitem__(1)
times now becomes a scalar quantity, because numpy's __getitem__ returns a scalar instead of an array when called with an int index. From this, pq.Quantity.__getitem__ creates a quantity, because times is not yet a quantity.

obj = Epoch(times=times, ...)
This is not a problem, because there is no check on whether times is an array.
...
return obj # This returns obj, with times=2*pq.s

Should it be like this or should a test for this be introduced?
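If such a check were introduced, one way to resolve the inconsistency would be to promote 0-d results back to 1-d inside __getitem__, e.g. with np.atleast_1d. This is only a sketch with a hypothetical `Wrapped` class, not neo's actual code:

```python
import numpy as np

class Wrapped(np.ndarray):
    """Hypothetical minimal Epoch-like ndarray subclass."""
    def __new__(cls, values):
        return np.asarray(values).view(cls)

    def __getitem__(self, i):
        item = super().__getitem__(i)
        # Integer indexing yields a 0-d scalar; promote it back to a
        # 1-element array so callers always get an array-like object.
        return np.atleast_1d(item).view(type(self))

w = Wrapped([1, 2, 3])
print(type(w[1]), w[1].shape)    # stays a Wrapped array with shape (1,)
print(type(w[0:2]), w[0:2].shape)  # slices are unaffected
```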

Apart from that, I'd like to know if any of you has deeper knowledge of __getitem__, because I noticed together with @JuliaSprenger that sometimes a scalar quantity is returned instead of a SpikeTrain object. I've been trying to change this, but it fails. Trying to copy what you did, I noticed that it works for Epoch, but not for SpikeTrain.

Again some code to illustrate what I mean:

What currently happens:
st1 = neo.SpikeTrain(times=[1,2,3]*pq.s, ...)
testobj = st1[1]
testobj is now a (scalar) quantity object, not a SpikeTrain
print(testobj) # This prints 2 s

new_st = st1[0:1]
new_st is a SpikeTrain, because numpy returns an array here, preserving the subclass
print(new_st) # This prints [1] s
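This subclass-dropping behavior is generic numpy and can be reproduced with a hypothetical minimal stand-in for SpikeTrain (no neo needed):

```python
import numpy as np

class FakeTrain(np.ndarray):
    """Hypothetical stand-in for SpikeTrain."""
    def __new__(cls, values):
        return np.asarray(values, dtype=float).view(cls)

st = FakeTrain([1.0, 2.0, 3.0])

print(type(st[1]))    # numpy.float64 -- integer indexing drops the subclass
print(type(st[0:1]))  # FakeTrain    -- slicing keeps it (a view)
```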

If one did the same in SpikeTrain as you did in Epoch, it would raise an error:
st2 = st1[1] # This would raise an error

What happens in SpikeTrain.__getitem__:

times = super(...).__getitem__(1)
times is now a scalar quantity just like in Epoch

obj = SpikeTrain(times=times, ...)
This will raise an error in SpikeTrain.__new__ (line 220), because times is a scalar, and thus an unsized object: TypeError: len() of unsized object
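The len() failure itself is easy to reproduce: a scalar Quantity behaves like a 0-d array, and calling len() on a 0-d array raises exactly this TypeError. Shown here with plain numpy, as an analogy:

```python
import numpy as np

scalar = np.array(2.0)   # 0-d array, analogous to a scalar Quantity
print(scalar.shape)      # ()

try:
    len(scalar)
except TypeError as e:
    print(e)             # len() of unsized object
```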

On the other hand, calling things like st1[0:1] will work just fine, because numpy then returns an array (or a subclass, which is SpikeTrain in this case)

I then tried creating a new SpikeTrain if I get a quantity by creating a list with a single entry:

if not isinstance(obj, SpikeTrain):
    obj = SpikeTrain(times=obj.magnitude, t_stop=self.t_stop, units=self.units) # Error happens here

This works for most cases, but it fails whenever sorted(np.concatenate((st1, st2))) is called, because apparently a kind of 'intermediate' SpikeTrain that doesn't carry any of the custom attributes is generated inside concatenate.
The error I then get is that the SpikeTrain object (referring to self here) has no attribute t_stop.
If anybody knows more about this stuff, I'd really appreciate your help.
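On the concatenate problem: numpy creates new instances of an ndarray subclass via views and templates without calling __new__ or __init__, so custom attributes like t_stop are missing on those instances unless they are copied in the __array_finalize__ hook (a real numpy mechanism). The `FakeTrain` class below is only a hypothetical sketch of the idea, not neo's implementation:

```python
import numpy as np

class FakeTrain(np.ndarray):
    """Hypothetical SpikeTrain-like subclass with one extra attribute."""
    def __new__(cls, values, t_stop):
        obj = np.asarray(values, dtype=float).view(cls)
        obj.t_stop = t_stop
        return obj

    def __array_finalize__(self, obj):
        # Called whenever numpy builds a new instance from an existing one
        # (views, slices, intermediates inside functions like concatenate).
        # Without this hook, those instances would lack t_stop entirely.
        self.t_stop = getattr(obj, 't_stop', None)

st1 = FakeTrain([1.0, 3.0], t_stop=10.0)

view = st1[0:1]          # created by numpy without __new__ being called
print(view.t_stop)       # 10.0 -- propagated by __array_finalize__
```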
