5

I am trying to apply the SelectKBest algorithm to my data to get the best features out of it. For this I first preprocess the data using DictVectorizer. The data consists of 1,061,427 rows with 15 features. Each feature has many different values, and I believe I am getting a memory error due to the high cardinality.

I get the following error:

File "FeatureExtraction.py", line 30, in <module>
    quote_data = DV.fit_transform(quote_data).toarray()
File "/usr/lib64/python2.6/site-packages/scipy/sparse/compressed.py", line 563, in toarray
    return self.tocoo(copy=False).toarray()
File "/usr/lib64/python2.6/site-packages/scipy/sparse/coo.py", line 233, in toarray
    B = np.zeros(self.shape, dtype=self.dtype)
MemoryError

Is there an alternative way to do this? Why do I get a memory error when I am running this on a machine that has 256 GB of RAM?

Any help is appreciated!

  • Seems like your error comes from the toarray method and not from DictVectorizer. Do you have to turn it into an array? Commented Jun 17, 2014 at 11:40
  • Yes, I have to convert it to an array. Is there any other way of doing it? Commented Jun 17, 2014 at 18:52
  • @TalKremerman I also tried removing the toarray() and passing sparse=False instead, and I still get the same error. Here is the code: DV = DictVectorizer(sparse=False) data = DV.fit_transform(data), and earlier I had written DV = DictVectorizer(sparse=True) data = DV.fit_transform(data).toarray(). Either way it gives a two-dimensional array of 0's and 1's, which is what I need to input to SelectKBest. Commented Jun 17, 2014 at 19:23
  • How about trying with a pipeline? Commented Jun 17, 2014 at 21:47
  • @TalKremerman That doesn't change anything. Commented Jun 18, 2014 at 10:29

7 Answers

6
+50

I figured out the problem.

When I removed a column that had a very high cardinality, DictVectorizer worked fine. That column had millions of distinct values, and hence DictVectorizer was giving a memory error.
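For reference, a minimal sketch of dropping such a field before vectorizing; the column name 'QuoteID' below is a made-up placeholder, not taken from the question:

# 'QuoteID' is a hypothetical name for the high-cardinality column
slim_data = [{k: v for k, v in row.items() if k != 'QuoteID'} for row in quote_data]
quote_matrix = DV.fit_transform(slim_data)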


1 Comment

Removing a column with a very high cardinality is not a solution, and it wasn't the root problem. The problem was toarray(). DictVectorizer from sklearn is designed for exactly this purpose: vectorizing categorical features with high cardinality. See my answer below.
4

The problem was toarray(). DictVectorizer from sklearn (which is designed for vectorizing categorical features with high cardinality) outputs sparse matrices by default. You are running out of memory because you force a dense representation by calling fit_transform().toarray().

Just use:

quote_data = DV.fit_transform(quote_data)
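If the goal is feature selection, the sparse matrix can then be fed straight into SelectKBest. A minimal sketch, where labels and k=10 are assumed placeholders rather than values from the question:

from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, chi2

DV = DictVectorizer(sparse=True)             # sparse output is the default
quote_matrix = DV.fit_transform(quote_data)  # scipy CSR matrix, no dense copy

# SelectKBest accepts sparse input; labels and k=10 are placeholders
selector = SelectKBest(chi2, k=10)
quote_selected = selector.fit_transform(quote_matrix, labels)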


2

If your data has high cardinality because it represents text, you can try using a more resource-friendly vectorizer like HashingVectorizer.
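A minimal sketch of the hashing idea, with made-up strings standing in for one high-cardinality text column (for dict-shaped records, sklearn's FeatureHasher is the closer analogue):

from sklearn.feature_extraction.text import HashingVectorizer

docs = ["red widget ab12", "blue widget ab12", "green gadget ab65"]  # made-up values

# n_features caps the output width, so memory stays bounded no matter
# how many distinct values appear; the result is a sparse matrix.
hv = HashingVectorizer(n_features=2**18)
X = hv.transform(docs)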


1

While performing fit_transform, instead of passing the whole list of dictionaries to it, create dictionaries that contain only the unique occurrences. Here is an example:

Transform dictionary:

Before

[ {A:1, B:22.1, C:Red,   D:AB12},
  {A:2, B:23.3, C:Blue,  D:AB12},
  {A:3, B:20.2, C:Green, D:AB65} ]

After

[ {A:1, B:22.1, C:Red, D:AB12},
  {C:Blue},
  {C:Green, D:AB65} ]

This saves a lot of space.


1

I was using DictVectorizer to transform categorical database entries into one-hot vectors and was continually getting this memory error. I was making the following fatal flaw: d = DictVectorizer(sparse=False). When I would call d.transform() on fields with 2000 or more categories, Python would crash. The solution was to instantiate DictVectorizer with sparse=True, which, by the way, is the default behavior. If you are building one-hot representations of items with many categories, dense arrays are not the most efficient structure to use, and calling .toarray() in this case is very inefficient.

The purpose of a one-hot vector in matrix multiplication is to select a row or column from some matrix. This can be done more efficiently by simply using the indices where a 1 exists in the vector. This is an implicit form of multiplication that requires orders of magnitude fewer operations than the explicit multiplication.
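A small sketch with made-up numbers showing that indexing a row gives the same result as multiplying by an explicit one-hot vector:

import numpy as np

# Toy matrix: 5 items, 3 dimensions (made-up numbers for illustration)
M = np.arange(15, dtype=float).reshape(5, 3)

# Explicit multiplication: a one-hot row picks out row 2 of M
one_hot = np.zeros((1, 5))
one_hot[0, 2] = 1.0
explicit = one_hot @ M

# Implicit multiplication: just index the row directly
implicit = M[[2]]

assert np.array_equal(explicit, implicit)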


1

@Serendipity Using the fit_transform function, I also ran into the memory error, and removing a column was not an option in my case. So I removed .toarray() and the code worked fine.

I ran two tests on a smaller dataset, with and without .toarray(), and in both cases it produced an identical matrix.

In short, removing .toarray() did the job!
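That equivalence is easy to check on a toy example; the field names below are made up:

from sklearn.feature_extraction import DictVectorizer
import numpy as np

sample = [{'colour': 'Red', 'size': 1}, {'colour': 'Blue', 'size': 2}]

dense = DictVectorizer(sparse=False).fit_transform(sample)
sparse_out = DictVectorizer(sparse=True).fit_transform(sample)

# Same encoding either way; only the storage format differs
assert np.array_equal(dense, sparse_out.toarray())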


0

In addition to the above answers, you may as well try using the storage-friendly LabelBinarizer class to build your own custom vectorizer. Here is the code:

from sklearn.preprocessing import LabelBinarizer

def dictsToVecs(list_of_dicts):
    # Binarize each field separately, then concatenate the columns row by row.
    X = []

    for key in list_of_dicts[0].keys():
        # Collect this field's value from every record.
        vals = [d[key] for d in list_of_dicts]

        enc = LabelBinarizer()
        vals = enc.fit_transform(vals).tolist()

        if not X:
            X = vals
        else:
            for row, extra in zip(X, vals):
                row.extend(extra)

    return X

Further, in the case of distinct train and test data sets, it can be helpful to save the binarizer instance fitted for each dictionary key at training time, so that its transform() method can be applied to the test data later.
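A minimal sketch of that fit-once, transform-later idea, with made-up values for a single key:

from sklearn.preprocessing import LabelBinarizer

train_vals = ['Red', 'Blue', 'Green']  # values of one field in the training set
test_vals = ['Blue', 'Red']            # values of the same field at test time

enc = LabelBinarizer()
enc.fit(train_vals)                    # fit (and keep) the encoder on training data only

train_X = enc.transform(train_vals)
test_X = enc.transform(test_vals)      # reuse the fitted encoder, no refitting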

