---

copyright:

lastupdated: "2021-08-02"

subcollection: AnalyticsEngine

---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:codeblock: .codeblock}
{:screen: .screen}
{:pre: .pre}
# Getting started with {{site.data.keyword.iae_full_notm}} classic instances
{: #getting-started-classic}
{{site.data.keyword.iae_full_notm}} provides a flexible framework to develop and deploy analytics applications on Apache Hadoop and Apache Spark. It allows you to create and manage clusters by using the {{site.data.keyword.Bluemix_short}} interface, the {{site.data.keyword.Bluemix_short}} CLI, or the REST APIs.
{: shortdesc}
Note: This topic refers to {{site.data.keyword.iae_full_notm}} service instances created by using the classic plans only. For more information about {{site.data.keyword.iae_full_notm}} classic instances, see the following resources:
{: #getting-started-classic-1}
- Get a quick overview of {{site.data.keyword.iae_full_notm}} in this short video.
{: #getting-started-classic-2}
- Watch how to get started using {{site.data.keyword.iae_full_notm}}. In this video you will learn how to provision an {{site.data.keyword.iae_full_notm}} cluster from IBM Cloud, find out about options to manage the cluster, and see how to connect {{site.data.keyword.DSX_short}} to {{site.data.keyword.iae_full_notm}} to analyze data. For details about the supported plans and how to provision and configure your cluster, see the {{site.data.keyword.iae_full_notm}} documentation.
- Watch how to provision an {{site.data.keyword.iae_full_notm}} service instance through {{site.data.keyword.DSX_short}}.
{: #getting-started-classic-3}
- To learn how to get started, watch the video called Getting started with IBM Analytics Engine in IBM Cloud and run through the tutorial using sample code and data. Copy or download the notebook called Use Spark SQL to explore heating problems in Manhattan from the {{site.data.keyword.DSX_short}} community to try it for yourself!
- Learn how to use `spark-submit` to run a Python script on an {{site.data.keyword.iae_full_notm}} cluster.
- Learn how to programmatically use {{site.data.keyword.iae_full_notm}} through this tutorial. Get access to sample scripts to start operationalizing your first applications.
- Get answers to some frequently asked questions about using {{site.data.keyword.iae_full_notm}}.
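As a minimal sketch of what submitting a Python script with `spark-submit` can look like, assuming an SSH-enabled cluster (the host name, user, and file paths below are placeholders, not values from this topic — substitute the details from your own service credentials):

```shell
# Placeholders: replace the host, user, and paths with your cluster's values.
# 1. Copy the script to the cluster's management node.
scp ./wordcount.py clsadmin@<cluster-host>:/home/wsuser/

# 2. Run spark-submit on the cluster over SSH.
ssh clsadmin@<cluster-host> \
  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    /home/wsuser/wordcount.py
```

Running in `cluster` deploy mode keeps the driver on the cluster, so the job continues even if your SSH session disconnects.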
## Next steps
{: #getting-started-next-steps}
Now that you have provisioned a service instance and have created a cluster, you can start running jobs and managing your cluster:
- Run Hadoop MapReduce jobs.
- Run Spark interactive jobs.
- Run Spark batch jobs.
- Manage your cluster by using the CLI for various operations.
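Spark batch jobs can also be submitted programmatically over the cluster's Livy REST endpoint. The following sketch builds the JSON body for such a submission; the endpoint URL, port, and user shown in the comments are assumptions for illustration — take the actual values from your cluster's service credentials:

```python
import json

# Hypothetical endpoint; replace <cluster-host> with your cluster's host name.
LIVY_URL = "https://<cluster-host>:8443/gateway/default/livy/v1/batches"

def build_batch_payload(script_path, args=None):
    """Build the JSON body for a Livy batch submission.

    `file` is the path of the script on the cluster; `args` is an
    optional list of command-line arguments passed to that script.
    """
    payload = {"file": script_path}
    if args:
        payload["args"] = list(args)
    return payload

payload = build_batch_payload("/home/wsuser/wordcount.py",
                              args=["--input", "data.txt"])
body = json.dumps(payload)

# To submit, POST `body` with basic auth to LIVY_URL, for example with
# the `requests` library (credentials are placeholders):
#   requests.post(LIVY_URL, data=body,
#                 auth=("clsadmin", "<password>"),
#                 headers={"Content-Type": "application/json"})
```

Keeping the payload construction separate from the HTTP call makes it easy to log or inspect exactly what is submitted before sending it to the cluster.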