This documentation is for Machine Learner 1.0.0.

The following deployment patterns are supported for WSO2 ML.

Before deploying WSO2 ML, follow the instructions in Production Deployment Guidelines.

 

Standalone mode

WSO2 ML is bundled with an inbuilt Apache Spark instance. In the standalone deployment pattern of WSO2 ML, Spark runs in local mode with one or more worker threads on the same machine. WSO2 ML runs in standalone mode by default and hosts the Spark driver program, which submits jobs to the Spark master.

The number of worker threads with which Spark runs is set by the spark.master property in the <WSO2ML_HOME>/repository/conf/etc/spark-config.xml file. The possible values are as follows.

Value     Description
local     Runs Spark locally with a single worker thread; no threads run in parallel.
local[k]  Runs Spark locally with k worker threads (ideally, k is the number of cores in your machine).
local[*]  Runs Spark locally with as many worker threads as there are logical cores in your machine.
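For example, the following sets standalone WSO2 ML to use all logical cores (a configuration sketch; the property element follows the format used in the <WSO2ML_HOME>/repository/conf/etc/spark-config.xml file):

```xml
<property name="spark.master">local[*]</property>
```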


With external Spark cluster

By default, WSO2 ML runs with an inbuilt Apache Spark instance. However, when working with big data, you can handle those large data sets in a distributed environment through WSO2 ML. You can carry out data pre-processing and model building processes on an Apache Spark cluster to share the workload between the nodes of the cluster. Using a Spark cluster optimizes the performance and reduces the time consumed to build and train a machine learning model for a large data set.

Follow the steps below to run the ML jobs by connecting WSO2 ML to an external Apache Spark cluster.

  • When following the instructions below, use Apache Spark version 1.4.1 with Apache Hadoop version 2.6 or later in the Apache Spark cluster.
  • The Spark deployment pattern can be Standalone, YARN, or Mesos.
  • WSO2 ML is unaware of the underlying configuration of the Spark cluster. It only interacts with the Spark master to which the jobs are submitted.
  1. Press Ctrl+C to shut down the WSO2 ML server. For more information on shutting down the WSO2 ML server, see Running the Product.

  2. Create a directory named <SPARK_HOME>/ml/ and copy the following jar files into it. These jar files can be found in the <ML_HOME>/repository/components/plugins directory.
    • org.wso2.carbon.ml.core_1.0.2.jar
    • org.wso2.carbon.ml.commons_1.0.2.jar
    • org.wso2.carbon.ml.database_1.0.2.jar
    • kryo_2.24.0.wso2v1.jar
  3. Create a file named spark-env.sh in the <SPARK_HOME>/conf/ directory and add the following entries.

    SPARK_MASTER_IP=127.0.0.1
    SPARK_CLASSPATH=${SPARK_HOME}/ml/org.wso2.carbon.ml.core_1.0.2.jar:${SPARK_HOME}/ml/org.wso2.carbon.ml.commons_1.0.2.jar:${SPARK_HOME}/ml/org.wso2.carbon.ml.database_1.0.2.jar:${SPARK_HOME}/ml/kryo_2.24.0.wso2v1.jar
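The classpath entry in step 3 can be assembled with a small shell sketch rather than typed by hand (the SPARK_HOME path here is an assumption; substitute your Spark 1.4.1 installation directory):

```shell
# SPARK_HOME is an assumed path; point it at your Spark 1.4.1 installation.
SPARK_HOME=/opt/spark-1.4.1

# The four ML jars copied into ${SPARK_HOME}/ml/ in step 2.
JARS="org.wso2.carbon.ml.core_1.0.2.jar \
org.wso2.carbon.ml.commons_1.0.2.jar \
org.wso2.carbon.ml.database_1.0.2.jar \
kryo_2.24.0.wso2v1.jar"

# Join them into the colon-separated SPARK_CLASSPATH value for spark-env.sh.
CLASSPATH=""
for jar in $JARS; do
  CLASSPATH="${CLASSPATH:+$CLASSPATH:}$SPARK_HOME/ml/$jar"
done
echo "SPARK_CLASSPATH=$CLASSPATH"
```

Redirecting the output of this script into <SPARK_HOME>/conf/spark-env.sh (together with the SPARK_MASTER_IP line) produces the same entries as above.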
  4. Restart the external Spark cluster using the following commands:

    {SPARK_HOME}$ ./sbin/stop-all.sh
    {SPARK_HOME}$ ./sbin/start-all.sh 
  5. In the <ML_HOME>/repository/conf/etc/spark-config.xml file, enter the Spark master URL as the value of the spark.master property as shown in the example below.

    You can find the Spark master URL (for example, spark://<host>:7077) in the Apache Spark Web UI.

    <property name="spark.master">{SPARK_MASTER_URL}</property>
  6. Restart the WSO2 ML server. For more information on restarting WSO2 ML server, see Running the Product.

With DAS as the Spark cluster

WSO2 DAS ships with an embedded Spark server that automatically creates a Spark cluster when DAS is started in clustered mode. Follow the steps below to run ML jobs by connecting WSO2 ML to a WSO2 DAS cluster that serves as the Spark cluster.

  1. Set up a DAS cluster using Carbon clustering, configured with at least one worker node. For more information on setting up a DAS cluster, see Clustering Data Analytics Server.

  2. Install the following ML features in each DAS node from the P2 repository of your ML version. For more information on installing features, see Installing and Managing Features.

    • Machine Learner Commons

    • Machine Learner Core

    • Machine Learner Database Service

  3. Stop all DAS nodes. For more information on stopping DAS nodes, see Running the Product in DAS documentation.
  4. Start the DAS cluster again without initializing the CarbonAnalytics and ML Spark contexts. Use the following options when starting the cluster.

    Option                           Purpose
    -DdisableAnalyticsSparkCtx=true  Disables the CarbonAnalytics Spark context.
    -DdisableMLSparkCtx=true         Disables the ML Spark context.
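For example, a DAS node can be started with both Spark contexts disabled as follows (the wso2server.sh startup script name follows the standard Carbon convention; adjust the path to your DAS installation):

```
./bin/wso2server.sh -DdisableAnalyticsSparkCtx=true -DdisableMLSparkCtx=true
```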
  5. To configure ML to use DAS as the Spark cluster, set the following property in the <ML_HOME>/repository/conf/etc/spark-config.xml file.

     <property name="spark.master">{SPARK_MASTER}</property>


    Add the following jars to the Spark executor extra class path.

    • org.wso2.carbon.ml.commons_1.0.2.jar
    • org.wso2.carbon.ml.core_1.0.2.jar
    • org.wso2.carbon.ml.database_1.0.2.jar
    • spark-mllib_2.10_1.4.1.wso2v1.jar
    • arpack_combined_0.1.0.wso2v1.jar
    • breeze_2.10_0.11.1.wso2v1.jar
    • core_1.1.2.wso2v1.jar
    • jblas_1.2.3.wso2v1.jar
    • spire_2.10_0.7.4.wso2v1.jar 


    These should also be added to the Spark driver extra class path as Spark configuration properties in the <ML_HOME>/repository/conf/etc/spark-config.xml file as shown below.

    <property name="spark.driver.extraClassPath">{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.commons_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.core_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.database_1.0.2.jar:{ML_HOME}/repository/components/plugins/spark-mllib_2.10_1.4.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/arpack_combined_0.1.0.wso2v1.jar:{ML_HOME}/repository/components/plugins/breeze_2.10_0.11.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/core_1.1.2.wso2v1.jar:{ML_HOME}/repository/components/plugins/jblas_1.2.3.wso2v1.jar:{ML_HOME}/repository/components/plugins/spire_2.10_0.7.4.wso2v1.jar
    </property>
    <property name="spark.executor.extraClassPath">{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.commons_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.core_1.0.2.jar:{ML_HOME}/repository/components/plugins/org.wso2.carbon.ml.database_1.0.2.jar:{ML_HOME}/repository/components/plugins/spark-mllib_2.10_1.4.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/arpack_combined_0.1.0.wso2v1.jar:{ML_HOME}/repository/components/plugins/breeze_2.10_0.11.1.wso2v1.jar:{ML_HOME}/repository/components/plugins/core_1.1.2.wso2v1.jar:{ML_HOME}/repository/components/plugins/jblas_1.2.3.wso2v1.jar:{ML_HOME}/repository/components/plugins/spire_2.10_0.7.4.wso2v1.jar
    </property>
  6. For the following two properties in the <ML_HOME>/repository/conf/etc/spark-config.xml file, enter values that are less than or equal to the resources allocated to the Spark workers in the DAS cluster. This ensures that ML does not request more resources than the DAS Spark cluster can provide.
    • spark.executor.memory

      <property name="spark.executor.memory">{memory_in_m/g}</property>

       

    • spark.executor.cores:

      <property name="spark.executor.cores">{number_of_cores}</property>
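For example, on a DAS worker with 4 GB of memory and two cores allocated to Spark, these two properties might be set as follows (illustrative values; size them according to your cluster):

```xml
<property name="spark.executor.memory">4g</property>
<property name="spark.executor.cores">2</property>
```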

       

  7. Start the ML server. For more information on starting WSO2 ML server, see Running the Product.

 
