{-# LANGUAGE OverloadedStrings #-}

import Control.Applicative
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

data Person = Person
    { name   :: !String
    , salary :: !Int
    }

instance FromNamedRecord Person where
    parseNamedRecord r = Person <$> r .: "name" <*> r .: "salary"

main :: IO ()
main = do
    csvData <- BL.readFile "salaries.csv"
    case decodeByName csvData of
        Left err -> putStrLn err
        Right (_, v) -> V.forM_ v $ \ p ->
            putStrLn $ name p ++ " earns " ++ show (salary p) ++ " dollars"
There's no end to what people consider CSV data. Most programs don't
follow RFC 4180, so one has to make a judgment call about which
contributions to accept. Consequently, not everything gets accepted,
because otherwise we'd end up with a (slow) general-purpose parsing
library, and there are plenty of those already. The goal is to roughly
accept what the Python csv module accepts.
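Some of that flexibility is already exposed as parsing options rather than new behavior. As a minimal sketch (the tab-separated file name and the (String, Int) row type are assumptions for illustration, not part of the original example), the delimiter can be overridden via DecodeOptions:

```haskell
import Data.Char (ord)
import qualified Data.ByteString.Lazy as BL
import Data.Csv
import qualified Data.Vector as V

-- Decode tab-separated data by overriding the default comma delimiter.
-- "salaries.tsv" and the (String, Int) row type are illustrative assumptions.
main :: IO ()
main = do
    tsvData <- BL.readFile "salaries.tsv"
    let opts = defaultDecodeOptions { decDelimiter = fromIntegral (ord '\t') }
    case decodeWith opts NoHeader tsvData of
        Left err -> putStrLn err
        Right v  -> V.forM_ (v :: V.Vector (String, Int)) print
```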
The Python csv module (which is implemented in C) is also considered
the baseline for performance. Adding options (e.g. the above-mentioned
parsing "flexibility") has to be traded off against performance.
There have been complaints about performance in the past, so when in
doubt, performance wins over features.
Last but not least, it's important to keep the dependency footprint
light, as each additional dependency incurs costs and risks in terms
of additional maintenance overhead and loss of flexibility. A new
package dependency should only be added if that dependency is known
to be reliable and there's a clear benefit that outweighs the cost.