Canova is an Apache 2.0-licensed, open-source tool for vectorizing raw data into usable vector formats across machine learning tools. Canova provides both an API and a command line interface (CLI).
Canova allows a practitioner to take raw data and produce vectorized data in open, standards-compliant formats (svmLight, etc.) in under five minutes; a minimal API sketch follows the list below. The following input data types are supported out of the box:
- CSV Data
- Raw Text Data (Tweets, Text Documents, etc.)
- Image Data
- Custom File Formats (MNIST)
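For example, reading one of the supported input types (CSV) through the API looks roughly like the sketch below. This is a minimal sketch rather than a definitive usage guide: the package names, the (lines to skip, delimiter) constructor arguments, the method calls, and the file path are assumptions based on the canova-api record reader classes.

    import java.io.File;
    import java.util.Collection;

    import org.canova.api.records.reader.RecordReader;
    import org.canova.api.records.reader.impl.CSVRecordReader;
    import org.canova.api.split.FileSplit;
    import org.canova.api.writable.Writable;

    public class CsvReadSketch {
        public static void main(String[] args) throws Exception {
            // Assumed constructor shape: (number of header lines to skip, field delimiter).
            RecordReader reader = new CSVRecordReader(0, ",");

            // Point the reader at a local CSV file (the path is a placeholder).
            reader.initialize(new FileSplit(new File("path/to/data.csv")));

            // Each record comes back as a collection of Writable values,
            // which downstream steps can serialize into vector formats such as svmLight.
            while (reader.hasNext()) {
                Collection<Writable> record = reader.next();
                System.out.println(record);
            }
            reader.close();
        }
    }

From there, the records can be written out in the target vector format, for example via the CLI.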
There are several ways to work with Canova. The most common is to download the latest stable release tarball.
Download our latest release at: [ Coming Soon ]
Canova is actively developed, so you can also clone the repository, compile it, and reference it in your project. First clone and build the ND4J repo prior to building Canova.
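For example, assuming the ND4J sources live in the deeplearning4j/nd4j GitHub repository and use the same Maven build, that step looks like:
$ git clone https://github.com/deeplearning4j/nd4j.git
$ cd nd4j && mvn clean install -DskipTests -Dmaven.javadoc.skip=true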
Then clone the Canova repository:
$ git clone https://github.com/deeplearning4j/Canova.git
Compile the project:
$ cd canova && mvn clean install -DskipTests -Dmaven.javadoc.skip=true
Add the locally compiled dependencies to your pom.xml file, as in the following example:
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>canova-api</artifactId>
    <version>0.0.0.3-SNAPSHOT</version>
</dependency>
Search for [canova](https://search.maven.org/#search%7Cga%7C1%7CCanova) on Maven Central to get a list of jars you can use.
Add the dependency information to your pom.xml.
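For example, a released artifact can be declared like the snippet below; the artifact shown and the version placeholder are illustrative, so use whatever the Maven Central search returns:
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>canova-api</artifactId>
    <!-- replace with the latest released version listed on Maven Central -->
    <version>x.y.z</version>
</dependency>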
To contribute to Canova:
- Check for open issues, or open a fresh issue to start a discussion around a feature idea or a bug.
- If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
- Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).
- Write a test which shows that the bug was fixed or that the feature works as expected.
- Send a pull request and bug us on Gitter until it gets merged and published.
On the roadmap:
- Adding pipelines for:
  - Timeseries
  - Audio
  - Video
- Parallel engine support
  - Hadoop / MapReduce
- More Text Processing Techniques
  - Kernel Hashing for the Text Pipeline