
I have the code below, which reads a CSV file and converts it to a hash. The keys depend on the number of key columns the user needs.

use warnings;
use strict;

my %hash;     
my $KeyCols = 2;
while (<DATA>) {
    chomp;
    # Limit the split to $KeyCols+1 fields so any commas in the value are kept intact.
    my @cols = split /,/, $_, $KeyCols+1;
    next unless @cols > $KeyCols;    # skip lines that don't have enough columns
    my $v = pop @cols;               # the last field is the value
    my $k = join '', @cols;          # the key columns, concatenated
    $hash{$k} = $v;
}

I need help achieving the same logic with the Text::CSV_XS package for efficiency.

    What Text::CSV_XS will do for you is replace the split ... line, handling it correctly regardless of the many possible "funny" cases, and perhaps faster. What you need is the most basic use of it; once you have @cols you can process it any way you like. So just open the docs, look at the synopsis, and try it out. (There are also many posts about it here.) Commented Jun 2, 2020 at 22:30

1 Answer


The real reason to use Text::CSV_XS is correctness. It's not going to be faster than what you have, but it will work where yours will fail.
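For example (my illustration, not from the answer), a field that is quoted and contains a comma breaks the plain split but is parsed correctly by Text::CSV_XS:

use Text::CSV_XS;

my $line  = 'a,"b,c",d';                 # second field is quoted and contains a comma
my @naive = split /,/, $line, 3;         # ('a', '"b', 'c",d') -- the key columns are wrong
my $csv   = Text::CSV_XS->new({ binary => 1 });
$csv->parse($line);
my @fields = $csv->fields();             # ('a', 'b,c', 'd')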

use Text::CSV_XS qw( );

my $csv = Text::CSV_XS->new({
    auto_diag => 2,
    binary    => 1,
});

my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    # Key: the first two fields concatenated. Value: the arrayref of all fields.
    $hash{ $row->[0] . $row->[1] } = $row;
}

Concatenating the fields together directly (without a separator) seems really odd.
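For instance, keys built from ("ab", "c") and ("a", "bc") would collide. One way around that (my variation, not part of the original answer) is to join the key fields with a separator such as Perl's $; (SUBSEP):

my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    # Join the key columns with $; so ("ab","c") and ("a","bc") produce distinct keys.
    $hash{ join $;, @{$row}[0, 1] } = $row;
}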

The above makes the value an arrayref of fields rather than a CSV string. If you want CSV as in the original, you will need to re-encode the fields into CSV.

my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    # Remove the two key fields, then re-encode the remaining fields as a CSV string.
    my ($k1, $k2) = splice(@$row, 0, 2);
    $csv->combine(@$row);
    $hash{ $k1 . $k2 } = $csv->string();
}
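
If the number of key columns should stay configurable as in the original $KeyCols, here is a sketch of one way to generalize this (my own variation, not from the answer):

my $KeyCols = 2;

my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    # Splice off the leading key columns and re-encode the remaining fields as CSV.
    my @keys = splice(@$row, 0, $KeyCols);
    $csv->combine(@$row);
    $hash{ join '', @keys } = $csv->string();
}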