
Follow the instructions below to deploy WSO2 Private PaaS (PPaaS) in a single JVM on a preferred IaaS, i.e., Kubernetes, Amazon Elastic Compute Cloud (EC2), OpenStack, or Google Compute Engine (GCE):

Step 1 - Configure external databases for PPaaS

For testing purposes, you can run your PPaaS setup on the internal H2 database (DB), which requires no additional setup. However, in a production environment it is recommended to use an external RDBMS (e.g., MySQL).


Follow the instructions given below to configure PPaaS with external databases:

WSO2 Private PaaS 4.1.0 requires the following external databases: User database, Governance database and Config database. Therefore, before using the above databases, you need to create these DBs, as explained in Working with Databases, and configure Private PaaS as mentioned below.

  1. Copy the MySQL JDBC driver to the <PRIVATE_PAAS_HOME>/repository/components/lib directory.

  2. Create the following three empty databases in your MySQL server (the related DB scripts are in the <PRIVATE_PAAS_HOME>/dbscripts directory) and grant permissions on them so that they can be accessed from a remote server; a sample command sequence is shown after these steps.

    ppaas_registry_db
    ppaas_user_db
    ppaas_config_db
     

  3. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/datasources directory and add the datasources that correspond to your DB in the master-datasources.xml file.
    Change the IP addresses and ports based on your environment.

    <datasource>
        <name>WSO2_GOVERNANCE_DB</name>
        <description>The datasource used for governance MySQL database</description>
        <jndiConfig>
            <name>jdbc/registry</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
     </datasource>
     <datasource>
        <name>WSO2_CONFIG_DB</name>
        <description>The datasource used for CONFIG MySQL database</description>
        <jndiConfig>
            <name>jdbc/ppaas_config</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
     </datasource>
     <datasource>
        <name>WSO2_USER_DB</name>
        <description>The datasource used for userstore MySQL database</description>
        <jndiConfig>
            <name>jdbc/userstore</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_user_db?autoReconnect=true</url>
                <username>[USERNAME]</username>
                <password>[PASSWORD]</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>
  4. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and change the datasources in both the user-mgt.xml and identity.xml files as follows: 

    <Property name="dataSource">jdbc/userstore</Property>
  5. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf directory and add the following configurations in the registry.xml file. Change your IP addresses and ports based on your environment.

    <dbConfig name="governance">
        <dataSource>jdbc/registry</dataSource>
    </dbConfig>
    <remoteInstance url="https://localhost:9443/registry">
        <id>governance</id>
        <dbConfig>governance</dbConfig>
        <readOnly>false</readOnly>
        <registryRoot>/</registryRoot>
        <enableCache>true</enableCache>
        <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_registry_db</cacheId>
    </remoteInstance>
    <dbConfig name="config">
        <dataSource>jdbc/ppaas_config</dataSource>
    </dbConfig>
    <remoteInstance url="https://localhost:9443/registry">
        <id>config</id>
        <dbConfig>config</dbConfig>
        <readOnly>false</readOnly>
        <registryRoot>/</registryRoot>
        <enableCache>true</enableCache>
        <cacheId>root@jdbc:mysql://[MYSQL_HOSTNAME]:[MYSQL_PORT]/ppaas_config_db</cacheId>
    </remoteInstance>
    <mount path="/_system/governance" overwrite="true">
        <instanceId>governance</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>
    <mount path="/_system/config" overwrite="true">
        <instanceId>config</instanceId>
        <targetPath>/_system/config</targetPath>
    </mount>
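
If you prefer to create the databases from the command line, the following is a minimal sketch of step 2 above; the user name, password and host values are placeholders and must match the credentials you use in master-datasources.xml.

    mysql -u root -p -e "CREATE DATABASE ppaas_registry_db; CREATE DATABASE ppaas_user_db; CREATE DATABASE ppaas_config_db;"
    # grant remote access to the PPaaS databases ('%' allows connections from any host)
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON ppaas_registry_db.* TO '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
    GRANT ALL PRIVILEGES ON ppaas_user_db.* TO '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
    GRANT ALL PRIVILEGES ON ppaas_config_db.* TO '[USERNAME]'@'%' IDENTIFIED BY '[PASSWORD]';
    FLUSH PRIVILEGES;"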

 

Step 2 - Setup ActiveMQ

PPaaS uses the Message Broker (MB) to handle the communication among all the components in a loosely coupled manner. Currently, PPaaS uses Apache ActiveMQ; however, PPaaS supports any Advanced Message Queuing Protocol (AMQP) Message Broker.


Follow the instructions below to run ActiveMQ on a separate host:

  1. Download and unzip Apache ActiveMQ.

  2. Start ActiveMQ

    ./activemq start
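
For example, assuming the ActiveMQ 5.10.0 binary TAR file has been downloaded from activemq.apache.org (the exact file name depends on the version you choose), the broker can be extracted and started as follows:

    tar -xzf apache-activemq-5.10.0-bin.tar.gz
    cd apache-activemq-5.10.0/bin
    ./activemq start
    # optionally confirm that the broker is up; the default JMS transport listens on port 61616
    ./activemq status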


 

 

Step 3 - Setup and start WSO2 CEP

By default, PPaaS is shipped with an embedded WSO2 Complex Event Processor (CEP). It is recommended to use the embedded CEP only for testing purposes and to configure CEP externally in a production environment. Furthermore, the compatible CEP versions differ based on whether the CEP is internal or external. WSO2 CEP 3.0.0 is embedded into PPaaS. However, PPaaS uses CEP 3.1.0 when working with CEP externally.

If you want to use CEP externally, prior to carrying out the steps below, download WSO2 CEP 3.1.0 and unzip the ZIP file.


Configuring CEP internally

Follow the instructions below to configure the embedded CEP:

Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory, as follows:

<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>

 

Configuring CEP externally

Follow the instructions below to configure CEP with PPaaS as an external component:

Step 1 - Configure the Thrift client
  1. Enable thrift stats publishing in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. Here you can set multiple CEP nodes for a High Availability (HA) setup.

    <cep>
       <node id="node-01">
          <statsPublisherEnabled>true</statsPublisherEnabled>
          <username>admin</username>
          <password>admin</password>
          <ip>localhost</ip>
          <port>7611</port>
       </node>
       <!--<node id="node-02">
          <statsPublisherEnabled>true</statsPublisherEnabled>
          <username>admin</username>
          <password>admin</password>
          <ip>10.10.1.1</ip>
          <port>7714</port>
       </node>-->
    </cep>
  2. Restart the PPaaS server without the internally embedded WSO2 CEP.

    sh wso2server.sh -Dprofile=cep-excluded
Step 2 - Configure CEP
  1. If you are configuring the external CEP in the High Availability (HA) mode, create a CEP HA deployment cluster in full-active-active mode. Note that it is recommended to set up CEP in HA mode.

    Skip this step if you are setting up the external CEP in a single node.

    For more information on CEP clustering see the CEP clustering guide.
    When following the steps in the CEP clustering guide, note that you need to configure all the CEP nodes in the cluster as mentioned in step 3 and only then carry out the preceding steps.

  2. Download the CEP extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_CEP_DISTRIBUTION>.
  3. Copy the stream-manager-config.xml file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/streamdefinitions directory to the <CEP_HOME>/repository/conf directory. The copy operations in steps 3 to 16 can also be scripted; see the sketch at the end of this list.
  4. Replace the content in the jndi.properties file, which is in the <CEP_HOME>/repository/conf directory, with the following configurations. Update the message-broker-ip and message-broker-port values.

    connectionfactoryName=TopicConnectionFactory
    java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
    
    # register some topics in JNDI using the form
    # topic.[jndiName]=[physicalName]
    topic.lb-stats=lb-stats
    topic.instance-stats=instance-stats
    topic.summarized-health-stats=summarized-health-stats
    topic.topology=topology
    topic.ping=ping
  5. Add the following content to the siddhi.extension file, which is in the <CEP_HOME>/repository/conf/siddhi directory.

    org.apache.stratos.cep.extension.GradientFinderWindowProcessor
    org.apache.stratos.cep.extension.SecondDerivativeFinderWindowProcessor
    org.apache.stratos.cep.extension.FaultHandlingWindowProcessor
    org.apache.stratos.cep.extension.ConcatWindowProcessor
    org.apache.stratos.cep.extension.MemeberRequestHandlingCapabilityWindowProcessor
    org.apache.stratos.cep.extension.SystemTimeWindowProcessor
  6. Copy the following JAR, which is in the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/lib directory, to the <CEP_HOME>/repository/components/lib directory.

    • org.apache.stratos.cep.310.extension-4.1.5.jar
  7. Copy the following JARs, which are in the <PPAAS_CEP_DISTRIBUTION>/lib directory to the <CEP_HOME>/repository/components/lib directory.

    • org.apache.stratos.messaging-4.1.x.jar

    • org.apache.stratos.common-4.1.x.jar

  8. Download the ActiveMQ 5.10.0 TAR file (or the latest stable release) from activemq.apache.org and extract it. The extracted folder path is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib directory to the <CEP_HOME>/repository/components/lib directory.

    • activemq-broker-5.10.0.jar 

    • activemq-client-5.10.0.jar 

    • geronimo-j2ee-management_1.1_spec-1.0.1.jar 

    • geronimo-jms_1.1_spec-1.1.1.jar 

    • hawtbuf-1.10.jar

  9. Download the commons-lang3-3.4.jar and commons-logging-1.2.jar files from commons.apache.org. Copy the downloaded files to the <CEP_HOME>/repository/components/lib directory.
  10. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventbuilders directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventbuilders directory:
    • HealthStatisticsEventBuilder.xml
    • LoadBalancerStatisticsEventBuilder.xml
  11. Copy the following file from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/inputeventadaptors directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/inputeventadaptors directory:
    • DefaultWSO2EventInputAdaptor.xml
  12. Copy the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/outputeventadaptors/JMSOutputAdaptor.xml file, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory:
  13. Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which you copied in the above step, as follows:

    <property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
  14. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/executionplans directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/executionplans directory:
    • AverageHeathRequest.xml
    • AverageInFlightRequestsFinder.xml
    • GradientOfHealthRequest.xml
    • GradientOfRequestsInFlightFinder.xml
    • SecondDerivativeOfHealthRequest.xml
    • SecondDerivativeOfRequestsInFlightFinder.xml
  15. If you are setting up the external CEP on a single node, change the siddhi.enable.distributed.processing property, in all of the above-mentioned CEP 3.1.0 execution plans, from RedundantMode to false.
  16. Copy the following files from the <PPAAS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventformatters directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventformatters directory:
    • AverageInFlightRequestsEventFormatter.xml
    • AverageLoadAverageEventFormatter.xml
    • AverageMemoryConsumptionEventFormatter.xml
    • FaultMessageEventFormatter.xml
    • GradientInFlightRequestsEventFormatter.xml
    • GradientLoadAverageEventFormatter.xml
    • GradientMemoryConsumptionEventFormatter.xml
    • MemberAverageLoadAverageEventFormatter.xml
    • MemberAverageMemoryConsumptionEventFormatter.xml
    • MemberGradientLoadAverageEventFormatter.xml
    • MemberGradientMemoryConsumptionEventFormatter.xml
    • MemberSecondDerivativeLoadAverageEventFormatter.xml
    • MemberSecondDerivativeMemoryConsumptionEventFormatter.xml
    • SecondDerivativeInFlightRequestsEventFormatter.xml
    • SecondDerivativeLoadAverageEventFormatter.xml
    • SecondDerivativeMemoryConsumptionEventFormatter.xml
  17. Add the CEP URLs as a payload parameter to the network partition. 

    If you are deploying Private PaaS on Kubernetes, then add the CEP URLs to the Kubernetes cluster.

    Example: 

    {
        "name": "payload_parameter.CEP_URLS",
        "value": "192.168.0.1:7712,192.168.0.2:7711"
    }
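
The copy operations in steps 3, 6, 7 and 10 to 16 above can be scripted. The following is a minimal sketch that assumes the <PPAAS_CEP_DISTRIBUTION> and <CEP_HOME> paths are exported as the PPAAS_CEP_DISTRIBUTION and CEP_HOME environment variables; the directory names follow the steps above.

    # stream definitions (step 3)
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/streamdefinitions/stream-manager-config.xml $CEP_HOME/repository/conf/
    # CEP extension and Stratos JARs (steps 6 and 7)
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/lib/org.apache.stratos.cep.310.extension-*.jar $CEP_HOME/repository/components/lib/
    cp $PPAAS_CEP_DISTRIBUTION/lib/org.apache.stratos.messaging-*.jar $CEP_HOME/repository/components/lib/
    cp $PPAAS_CEP_DISTRIBUTION/lib/org.apache.stratos.common-*.jar $CEP_HOME/repository/components/lib/
    # event builders, input/output event adaptors, execution plans and event formatters (steps 10 to 16)
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/eventbuilders/*.xml $CEP_HOME/repository/deployment/server/eventbuilders/
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/inputeventadaptors/DefaultWSO2EventInputAdaptor.xml $CEP_HOME/repository/deployment/server/inputeventadaptors/
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/outputeventadaptors/JMSOutputAdaptor.xml $CEP_HOME/repository/deployment/server/outputeventadaptors/
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/executionplans/*.xml $CEP_HOME/repository/deployment/server/executionplans/
    cp $PPAAS_CEP_DISTRIBUTION/wso2cep-3.1.0/eventformatters/*.xml $CEP_HOME/repository/deployment/server/eventformatters/
    # remember to update JMSOutputAdaptor.xml and the execution plans afterwards (steps 13 and 15)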

If you have configured CEP 3.1.0 externally, start the CEP server after you have successfully completed the above configurations:

./wso2server.sh

 

Step 4 - Setup and start WSO2 DAS (Optional)

Skip this step if you do not want to enable monitoring and metering in PPaaS using DAS. Even though this step is optional, it is recommended that you enable monitoring and metering.

Optionally, you can configure PPaaS to work with WSO2 Data Analytics Server (DAS), so that it can handle the monitoring and metering aspects of PPaaS.

If you want to use DAS with PPaaS, prior to carrying out the steps below, download WSO2 DAS 3.0.0 and unzip the ZIP file.


Use MySQL 5.6 and the 5.1.x MySQL Connector for Java when carrying out the following configurations.

Follow the instructions below to manually set up DAS with PPaaS:

Step 1 - Configure PPaaS

  1. Enable thrift stats publishing with the DAS_HOSTNAME and DAS_TCP_PORT values in the thrift-client-config.xml file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. If needed, you can set multiple DAS nodes for a High Availability (HA) setup.

    <!-- Apache thrift client configuration for publishing statistics to WSO2 CEP and WSO2 DAS-->
    <thriftClientConfiguration>
            .
            .
            .
           <das>
                <node id="node-01">
                     <statsPublisherEnabled>true</statsPublisherEnabled>
                     <username>admin</username>
                     <password>admin</password>
                     <ip>[DAS_HOSTNAME]</ip>
                     <port>[DAS_TCP_PORT]</port>
                </node>
                <!--<node id="node-02">
                     <statsPublisherEnabled>true</statsPublisherEnabled>
                     <username>admin</username>
                     <password>admin</password>
                     <ip>localhost</ip>
                     <port>7613</port>
                </node>-->
           </das>
       </config>
    </thriftClientConfiguration>
  2. Configure the Private PaaS metering dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:

    das.metering.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/metering-dashboard

     

  3. Configure the PPaaS monitoring dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <PRIVATE_PAAS_HOME>/repository/conf/cartridge-config.properties file as follows:

    das.monitoring.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/monitoring-dashboard

Step 2 - Configure DAS

  1. Create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE and ANALYTICS_PROCESSED_DATA_STORE databases in MySQL using the following MySQL scripts:

    CREATE DATABASE ANALYTICS_FS_DB;
    CREATE DATABASE ANALYTICS_EVENT_STORE;
    CREATE DATABASE ANALYTICS_PROCESSED_DATA_STORE;
  2. Configure the DAS analytics-datasources.xml file, which is in the <DAS_HOME>/repository/conf/datasources directory, as follows to create the WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB datasources.

    <datasources-configuration>
       <providers>
          <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
       </providers>
       <datasources>
          <datasource>
             <name>WSO2_ANALYTICS_FS_DB</name>
             <description>The datasource used for analytics file system</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_FS_DB</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
          <datasource>
             <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
             <description>The datasource used for analytics record store</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_EVENT_STORE</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
          <datasource>
             <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
             <description>The datasource used for analytics record store</description>
             <definition type="RDBMS">
                <configuration>
                   <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
                   <username>root</username>
                   <password>root</password>
                   <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                   <maxActive>50</maxActive>
                   <maxWait>60000</maxWait>
                   <testOnBorrow>true</testOnBorrow>
                   <validationQuery>SELECT 1</validationQuery>
                   <validationInterval>30000</validationInterval>
                   <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
             </definition>
          </datasource>
       </datasources>
    </datasources-configuration>
  3. Set the analytics datasources created in the above step (WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB) in the DAS analytics-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics directory.

    <analytics-dataservice-configuration>
       <!-- The name of the primary record store -->
       <primaryRecordStore>EVENT_STORE</primaryRecordStore>
       <!-- The name of the index staging record store -->
       <indexStagingRecordStore>INDEX_STAGING_STORE</indexStagingRecordStore>
       <!-- Analytics File System - properties related to index storage implementation -->
       <analytics-file-system>
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem</implementation>
          <properties>
                <!-- the data source name mentioned in data sources configuration -->
                <property name="datasource">WSO2_ANALYTICS_FS_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-file-system>
       <!-- Analytics Record Store - properties related to record storage implementation -->
       <analytics-record-store name="EVENT_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <analytics-record-store name="INDEX_STAGING_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
                <property name="category">limited_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <analytics-record-store name = "PROCESSED_DATA_STORE">
          <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
          <properties>
                <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
                <property name="category">large_dataset_optimized</property>
          </properties>
       </analytics-record-store>
       <!-- The data indexing analyzer implementation -->
       <analytics-lucene-analyzer>
       	<implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
       </analytics-lucene-analyzer>
       <!-- The maximum number of threads used for indexing per node, -1 signals to auto detect the optimum value,
            where it would be equal to (number of CPU cores in the system - 1) -->
       <indexingThreadCount>-1</indexingThreadCount>
       <!-- The number of index shards, should be equal or higher to the number of indexing nodes that is going to be working,
            ideal count being 'number of indexing nodes * [CPU cores used for indexing per node]' -->
       <shardCount>6</shardCount>
       <!-- Data purging related configuration -->
       <analytics-data-purging>
          <!-- Below entry will indicate purging is enable or not. If user wants to enable data purging for cluster then this property
           need to be enable in all nodes -->
          <purging-enable>false</purging-enable>
          <cron-expression>0 0 0 * * ?</cron-expression>
          <!-- Tables that need include to purging. Use regex expression to specify the table name that need include to purging.-->
          <purge-include-tables>
             <table>.*</table>
             <!--<table>.*jmx.*</table>-->
          </purge-include-tables>
          <!-- All records that insert before the specified retention time will be eligible to purge -->
          <data-retention-days>365</data-retention-days>
       </analytics-data-purging>
       <!-- Receiver/Indexing flow-control configuration -->
       <analytics-receiver-indexing-flow-control enabled = "true">
           <!-- maximum number of records that can be in index staging area before receiving is throttled -->
           <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
           <!-- the limit on number of records to be lower than, to reduce throttling -->
           <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>    
       </analytics-receiver-indexing-flow-control>
    </analytics-dataservice-configuration>
  4. Add the MySQL Java connector 5.1.x JAR file, which is supported by MySQL 5.6, to the <DAS_HOME>/repository/components/lib directory.
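
As a quick sanity check after the above steps, you can verify that the analytics databases exist and that the connector JAR is in place; a minimal sketch, using the example host, port and credentials from the datasource configuration above, is shown below.

    mysql -h 127.0.0.1 -P 3306 -u root -p -e "SHOW DATABASES LIKE 'ANALYTICS%';"
    # the exact connector file name depends on the 5.1.x version you downloaded
    ls <DAS_HOME>/repository/components/lib | grep -i mysql-connector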


Step 2.1 - Download the DAS extension distribution

Download the DAS extension from the PPaaS product page on the WSO2 website and uncompress the file. The extracted distribution is referred to as <PPAAS_DAS_DISTRIBUTION>.

 

Step 2.2 - Create PPaaS Metering Dashboard with DAS

  1. Add the org.apache.stratos.das.extension-4.1.5.jar file, which is in the <PPAAS_DAS_DISTRIBUTION>/lib directory, into the <DAS_HOME>/repository/components/lib directory.

  2. Add the following Java class name to the spark-udf-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics/spark directory.

    <class-name>org.apache.stratos.das.extension.TimeUDF</class-name>
  3. Add Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/jaggery-files directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.

  4. Manually create MySQL databases and tables using the queries, which are in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/metering-mysqlscript.sql file. 

    CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_STATUS(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), MemberId VARCHAR(150), MemberStatus VARCHAR(50));
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_COUNT(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), CreatedInstanceCount int, InitializedInstanceCount int, ActiveInstanceCount int, TerminatedInstanceCount int);
    CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_INFORMATION(MemberId VARCHAR(150), InstanceType VARCHAR(150), ImageId VARCHAR(150), HostName VARCHAR(150), PrivateIPAddresses VARCHAR(150), PublicIPAddresses VARCHAR(150), Hypervisor VARCHAR(150), CPU VARCHAR(10) , RAM VARCHAR(10), OSName VARCHAR(150), OSVersion VARCHAR(150));
  5. Apply a WSO2 User Engagement Server (UES) patch to the DAS dashboard.
    You need to do this to populate the metering dashboard.

    1. Copy the ues-gadgets.js and the ues-pubsub.js files from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/js directory.

    2. Copy the dashboard.jag file from the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/theme/templates directory.

  6. Add the ppaas-metering-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the metering dashboard.

    If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, initially create the folder before moving the CAR file.

    You can navigate to the metering dashboard from the Private PaaS application topology view at the application or cluster level as shown below.


    The following is a sample metering dashboard:

Step 2.3 - Create the PPaaS Monitoring Dashboard with DAS

  1. Add the Jaggery files, which are in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
  2. Manually create the MySQL database and tables using the queries in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files/monitoring-mysqlscript.sql file. 

    CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
    CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_LOAD_AVERAGE_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_LOAD_AVERAGE_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_IN_FLIGHT_REQUESTS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), COUNT DOUBLE);
    CREATE TABLE ANALYTICS_EVENT_STORE.SCALING_DETAILS(Time VARCHAR(50), ScalingDecisionId VARCHAR(150), ClusterId VARCHAR(150), MinInstanceCount INT, MaxInstanceCount INT, RIFPredicted INT, RIFThreshold INT ,RIFRequiredInstances INT, MCPredicted INT, MCThreshold INT, MCRequiredInstances INT ,LAPredicted INT, LAThreshold INT,LARequiredInstances INT,RequiredInstanceCount INT ,ActiveInstanceCount INT, AdditionalInstanceCount INT, ScalingReason VARCHAR(150));
  3. Copy the CEP EventFormatter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/eventformatters directory, into the <CEP_HOME>/repository/deployment/server/eventformatters directory.
  4. Copy the CEP OutputEventAdapter artifacts, which are in the <PPAAS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/outputeventadaptors directory, into the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory and update the receiverURL and authenticatorURL with the DAS_HOSTNAME, DAS_TCP_PORT and DAS_SSL_PORT values as follows:

    <outputEventAdaptor name="DefaultWSO2EventOutputAdaptor"
      statistics="disable" trace="disable" type="wso2event" xmlns="http://wso2.org/carbon/eventadaptormanager">
      <property name="username">admin</property>
      <property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>
      <property name="password">admin</property>
      <property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>
    </outputEventAdaptor>
  5. Add the ppaas-monitoring-service.car file, which is in the <PPAAS_DAS_DISTRIBUTION>/monitoring-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the monitoring dashboard.

    If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, initially create the folder before moving the CAR file.

  6. Navigate to the monitoring dashboard from the PPaaS Console using the Monitoring menu.

    The following is a sample monitoring dashboard:
  7. Once you have carried out all the configurations, start the DAS server. After the DAS server has started successfully, start the PPaaS server.

After you have successfully configured DAS in a separate host, start the DAS server:

./wso2server.sh

 

Step 5 - Setup PPaaS

When using a VM setup or Kubernetes, you need to configure PPaaS accurately before attempting to deploy a WSO2 product on the PaaS.


Follow the instructions below to configure PPaaS:

Some steps are marked as optional as they are not applicable to all IaaS.
Therefore, only execute the instructions that correspond to the IaaS being used!

Step 1 - Install Prerequisites

Ensure that the following prerequisites have been met based on your environment and IaaS.

  1. Install the prerequisites listed below.

    • Oracle Java SE Development Kit (JDK)

    • Apache ActiveMQ

    For more information on the prerequisites, see Prerequisites.

  2. Download the Private PaaS binary distribution from the PPaaS product page and unzip it.

 

Step 2 - Setup a Kubernetes Cluster (Optional)

This step is only mandatory if you are using Kubernetes.

You can set up a Kubernetes cluster using one of the following approaches: locally with Vagrant, or on AWS EC2.


Prerequisites 

Before starting, download and install the following prerequisites: Vagrant and Oracle VirtualBox.

Follow the instructions below to set up Kubernetes with Vagrant:

  1. Clone the following Vagrant Git repository. This folder is referred to as <VAGRANT_KUBERNETES_SETUP>.

    git clone https://github.com/imesh/kubernetes-vagrant-setup.git
  2. Disable DHCP server in VirtualBox:

    VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0
  3. Start a new Kubernetes cluster using the following command, which will start one master node and one minion:

    run.sh

     

    1. If more than one minion is needed, run the following command with the required number of instances. The number of instances is defined by the NUM_INSTANCES variable.

      run.sh NUM_INSTANCES=2
    2. If you need to specify the minion's memory and CPU, use the following command:
      Example: 

      run.sh NUM_INSTANCES=2 NODE_MEM=4096 NODE_CPUS=2
  4. Once the nodes are connected to the cluster and the state of the nodes changes to Ready, the Kubernetes cluster is ready for use.
    Execute the following Kubernetes CLI commands and verify the cluster status:

    kubectl get nodes
    
    NAME           LABELS                                STATUS
    172.17.8.102   kubernetes.io/hostname=172.17.8.102   Ready

Access the Kubernetes UI using the following URL http://<HOST>:<HTTP_PORT>/ui

Example: http://172.17.8.101:8080/ui

If you get a notification mentioning that the "kube-ui" endpoints cannot be found, execute the kube-ui-pod.sh script.

Follow the instructions below to create an elastic Kubernetes cluster in EC2, with three worker nodes and a master, from a Mac operating system:

The Kubernetes cluster will also include the following sections:

 

  1. Install and configure Kubectl.

    Kubectl is a client command line tool provided by the Kubernetes team. It helps monitor and manage Kubernetes Clusters.

    wget https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl
    chmod +x kubectl
    mv kubectl /usr/local/bin/

    For more information, see installing and configuring Kubectl.

  2. Install and configure the AWS Command Line Interface.

    wget https://bootstrap.pypa.io/get-pip.py
    sudo python get-pip.py
    sudo pip install awscli

    If you encounter an issue, use the following command to resolve it:

    sudo pip uninstall six
    sudo pip install --upgrade python-heatclient

    For more information see, AWS command line interface.

  3. Create the Kubernetes Security Group.

    aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
    aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes

    The port 8080 is not fixed. It will change based on the KUBERNETES_MASTER_PORT value you define in the Kubernetes Cluster resource definition.

    You can configure the KUBERNETES_MASTER_PORT by defining it under the Kubernetes Master property parameter.

    Example:

    {
      "name": "KUBERNETES_MASTER_PORT",
      "value": "8080"
    }
  4. Configure and save the master cloud-configs file. For more information, see the configuration details for master.yaml .
  5. Configure and save the node cloud-configs. For more information, see the configuration details for node.yaml .
  6. Launch the master.

    Replace the <ami_image_id> with a suitable version of the CoreOS image for AWS. It is recommended to use the following CoreOS alpha channel AMI Image ID: ami-f7a5fec7

    1. Run the instance.

      aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
      --region us-west-2 --security-groups kubernetes --instance-type m3.medium \
      --user-data file://master.yaml
    2. Record the InstanceId of the master.
    3. Gather the public and private IP addresses of the master node:

      aws ec2 describe-instances --instance-id <instance-id>

      The output:

      "Reservations": [
        {
          "Instances": [
            {
              "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
              "RootDeviceType": "ebs",
              "State": {
                "Code": 16,
                "Name": "running"
              },
              "PublicIpAddress": "54.68.97.117",
              "PrivateIpAddress": "172.31.9.9",
              }
  7. Update the node.yaml cloud-config file.

    Replace all instances of the <master-private-ip> in the node.yaml file with the private IP address of the master node.

  8. Launch the three worker nodes.

    Replace the <ami_image_id> with a suitable version of the CoreOS image for AWS. It is recommended to use the same AMI image ID used by the master.

    aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
    --region us-west-2 --security-groups kubernetes --instance-type m3.medium \
    --user-data file://node.yaml
  9. Configure the Kubectl SSH tunnel.

    This command enables a secure communication between the Kubectl client and the Kubernetes API.

    ssh -i key-file -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
  10. List the worker nodes.

    Once the worker instances are fully booted, the kube-register service running on the master node will automatically register them with the Kubernetes API server. This process will take several minutes.

    kubectl get nodes
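
With the SSH tunnel from step 9 in place, you can also confirm that the Kubernetes API server answers locally before handing the cluster over to PPaaS; the /version and /api/v1/nodes paths used below are standard Kubernetes API endpoints.

    # both requests go through the local end of the SSH tunnel (port 8080)
    curl http://127.0.0.1:8080/version
    curl http://127.0.0.1:8080/api/v1/nodes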

Step 3 - Setup Puppet Master (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).

Puppet is an open source configuration management utility. In Private PaaS, Puppet is used as the orchestration layer. Private PaaS does not keep any templates or configurations in Puppet; it holds only the product distributions. Puppet acts as a file server while the Configurator does the configuration at runtime.

Follow the instructions below to setup the Puppet Master.

Step 1 - Configure Puppet Master


Follow the steps given below to install Puppet Master on Ubuntu:

  1.  Download the Puppet Master distribution package for the Ubuntu release.

    wget https://apt.puppetlabs.com/puppetlabs-release-<CODE_NAME>.deb
     
    # For example for Ubuntu 14.04 Trusty:
    wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
  2. Install the downloaded distribution package.

    sudo dpkg -i puppetlabs-release-<CODE_NAME>.deb  
  3. Install Puppet Master.

    sudo apt-get update
    sudo apt-get install puppetmaster
  4. Install Passenger with Apache.

    For more information, see Install Apache and Passenger.

  5. Change the Ubuntu hostname by following the steps given below:
    1. Update the /etc/hosts file.

      echo "127.0.0.1 puppet.test.org" | sudo tee -a /etc/hosts
    2. Change the value of the hostname.

      sudo hostname puppet.test.org
  6. Add the following entry to the /etc/puppet/autosign.conf file:

     

    *.test.org
  7.  Add the server=puppet.test.org line to the puppet.conf file, which is in the /etc/puppet directory.

    [main]
    server=puppet.test.org
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    templatedir=$confdir/templates
    dns_alt_names=puppet
    
    [master]
    # These are needed when the puppetmaster is run by passenger
    # and can safely be removed if webrick is used.
    ssl_client_header = SSL_CLIENT_S_DN
    ssl_client_verify_header = SSL_CLIENT_VERIFY
  8. Restart the Puppet Master.

    /etc/init.d/puppetmaster restart
  9. Download the VM Tools by navigating to the following path via the PPaaS product page.

    Cartridges > common > wso2ppaas-vm-tools-4.1.1

  10. Copy and replace the content in the Puppet Master's /etc/puppet folder with the content in the <VM_TOOLS>/Puppet directory.

  11. Configure the mandatory modules.

Mandatory modules

It is mandatory to configure the following modules when configuring Puppet Master for PPaaS:


Python Cartridge Agent Module
  1. Download the Cartridge Agent via the PPaaS product page.

  2. Copy the downloaded apache-stratos-python-cartridge-agent-4.1.4.zip  to the /etc/puppet/modules/python_agent/files directory.

  3. Change the file permission value, of the apache-stratos-python-cartridge-agent-4.1.4.zip file, to 0755.

    chmod 755 apache-stratos-python-cartridge-agent-4.1.4.zip
  4. Update the base.pp file in the /etc/puppet/manifests/nodes directory, with the following Python agent variables.

      $pca_name             = 'apache-stratos-python-cartridge-agent'
      $pca_version          = '4.1.4'
      $mb_ip                = 'MB-IP'
      $mb_port              = 'MB-PORT'
      $mb_type              = 'activemq' #in wso2mb case, value should be 'wso2mb'
      $cep_ip               = "CEP-IP"
      $cep_port             = "7711"
      $cep_username         = "admin"
      $cep_password         = "admin"
      $bam_ip               = '192.168.30.96'
      $bam_port             = '7611'
      $bam_secure_port      = '7711'
      $bam_username         = 'admin'
      $bam_password         = 'admin'
      $metadata_service_url = 'METADATA-SERVICE-URL'
      $agent_log_level      = 'INFO'
      $enable_log_publisher = 'false'

    Optionally you can configure the MB_IP, MB_PORT, PUPPET_IP and the PUPPET_HOSTNAME in the network partition as shown below.

    Note that the values defined in the network partition receive higher priority over the values declared in the base.pp file (i.e., the values declared in the base.pp file are overwritten by the values declared in the network partition).

    {
        "id": "network-partition-openstack",
        "provider": "openstack",
        "partitions": [
            {
                "id": "partition-1",
                "property": [
                    {
                        "name": "region",
                        "value": "<REGION>"
                    }
                ]
            },
            {
                "id": "partition-2",
                "property": [
                    {
                        "name": "region",
                        "value": "<REGION>"
                    }
                ]
            }
        ],
        "properties": [
            {
                "name": "payload_parameter.PUPPET_IP",
                "value": "<PUPPET_MASTER_IP>"
            },
            {
                "name": "payload_parameter.MB_IP",
                "value": "<MESSAGE_BROKER_IP>"
            },
            {
                "name": "payload_parameter.MB_PORT",
                "value": "<MESSAGE_BROKER_PORT>"
            },
            {
                "name": "payload_parameter.PUPPET_HOSTNAME",
                "value": "<PUPPET_MASTER_HOSTNAME>"
            }
        ]
    }
Java Module
  1. Copy the downloaded jdk-7u72-linux-x64.tar.gz file to the files folder, which is in the /etc/puppet/modules/java directory. 

    You can download jdk-7u72-linux-x64.tar.gz from here.

  2. Change file permission value, of the jdk-7u72-linux-x64.tar.gz file, to 0755.

    chmod 755 jdk-7u72-linux-x64.tar.gz
  3. Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following Java variables.

    $java_distribution = 'jdk-7u72-linux-x64.tar.gz' 
    $java_folder = 'jdk1.7.0_72' 
Configurator Module
  1. Download the Configurator by navigating to the following path via the PPaaS product page.

    Cartridges > common > wso2ppaas-configurator-4.1.1

  2. Copy the Configurator (ppaas-configurator-4.1.1.zip) to the /etc/puppet/modules/configurator/files directory.
  3. Change the file permission value, of the ppaas-configurator-4.1.1.zip file, to 0755.

    chmod 755 ppaas-configurator-4.1.1.zip
  4.  Update the base.pp file, which is in the /etc/puppet/manifests/nodes directory, with the following configurator variables.

    $configurator_name = 'ppaas-configurator' 
    $configurator_version = '4.1.1'
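
Once the mandatory modules are configured, you can optionally verify that the Puppet Master is running and that autosigning is in place; a minimal check, assuming the default Puppet Master port (8140) and the /etc/puppet paths used above, is shown below.

    sudo netstat -tlnp | grep 8140        # the Puppet Master (or Passenger) should be listening
    cat /etc/puppet/autosign.conf         # should contain *.test.org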


 

 

Step 2 - Update the cartridge-config.properties file

Update the values of the following parameters in the cartridge-config.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory.

The parameters are as follows:

  • [PUPPET_IP] - The IP address of the running Puppet instance.

  • [PUPPET_HOST_NAME] - The host name of the running Puppet instance.
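
A minimal sketch of this update is shown below; the property keys (puppet.ip and puppet.hostname) are assumptions made for illustration, so check the cartridge-config.properties file for the exact parameter names in your distribution.

    cd <PRIVATE_PAAS_HOME>/repository/conf
    # hypothetical keys; replace the right-hand values with your Puppet Master's IP address and host name
    sed -i 's/^puppet.ip=.*/puppet.ip=[PUPPET_IP]/' cartridge-config.properties
    sed -i 's/^puppet.hostname=.*/puppet.hostname=[PUPPET_HOST_NAME]/' cartridge-config.properties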

 

Step 4 - Create a cartridge base image (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).


Follow the instructions below to create a cartridge on the EC2 IaaS:

Step 1 - Log in to your EC2 account

To follow this guide, you need an EC2 account. If you do not have an account, create an AWS account. For more information, see Sign Up for Amazon EC2. This account must be authorized to manage EC2 instances (including starting and stopping instances, and creating security groups and key pairs).

Step 2 - Create a security group

Before launching the instance, you need to create the right security group. This security group defines firewall rules for your instances, which are a list of ports that are used as part of the default PPaaS deployment. These rules specify which incoming network traffic is delivered to your instance. All other traffic is ignored. The ports that should be defined are listed under the default ports.

Follow the instructions below to create the security group and configure it:

  1. On the Network and Security menu, click Security Groups.
  2. Click Create Security Group.
  3. Enter the name and description of the security group.
  4. Click Yes, Create.
  5. Click Inbound.
  6. Select Custom TCP rule.

  7. Enter the port or port range.
    There are two kinds of ports listed in the default ports: ports that are open for outside access and ports that are restricted to internal access. Ideally, you should enter each port as a separate rule.
  8. Click Add Rule and then click Apply Rule Changes

    Always apply rule changes, as your rule will not get saved unless the rule changes are applied.
    Repeat steps 6 to 8 to add all the ports mentioned, as each port or port range has to be added as a separate rule.

    Write down the names of your security groups if you wish to enter your user data in the wizard.
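
If you prefer to script this, the same security group and rules can be created with the AWS CLI; the following is a minimal sketch in which ppaas is an example group name and <PORT> stands for each entry in the default ports list.

    aws ec2 create-security-group --group-name ppaas --description "PPaaS Security Group"
    # repeat the next command for every port or port range in the default ports list
    aws ec2 authorize-security-group-ingress --group-name ppaas --protocol tcp --port <PORT> --cidr 0.0.0.0/0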

Step 3 - Create a key pair

Save your private key in a safe place on your computer. Note down the location, because you will need the key pair to connect to your instance.

Follow the instructions below to create a key pair, download it and secure it:

  1. On the Network and Security menu, click Key Pairs.
  2. Click Create New Key Pair.
  3. Enter a name for your key pair.
  4. Click Create. After the key pair automatically downloads, click Close.
  5. Protect your key pair by executing the following command in your terminal.
    By default, your PEM file will be unprotected.  Use the following command to secure your PEM file, so that others will not have access to it: 

    chmod 0600 <path-to-the-private-key>

Step 4 - Spawn an instance on EC2

Follow the instructions below to spawn an instance on EC2:

  1. Sign in to the Amazon Web Services (AWS) Management Console and open the Amazon EC2 console at  https://console.aws.amazon.com/ec2/.  
  2. Click EC2 on the home console.
  3. Select the Region for the instance from the region drop-down list.
  4. Click Launch Instance.

  5. Select Quick Launch Wizard.

  6. Name your instance, for example PPaaSCartridgeInstance.

  7. Select the key pair that you created.
  8. Select More Amazon Machine Images and click on Continue.

  9. On the next page, specify the image.
  10. Click Continue.
  11. Click Edit Details.
  12. Edit the image size. 
    1. Select the Instance Details option.
    2. Change the image type as required.
  13. Select a security group.
    1. Select the Security Settings option.
    2. Click Select Existing Security Groups.
    3. Select the PPaaS security group you created previously.
  14. Click Launch to start the EC2 instance.

  15. Click Close.
    This will redirect you to the instance page. It takes a short time for an instance to launch. The instance's status appears as pending while it is launching. After the instance is launched, its status changes to running.

Step 5 - Configure the cartridge base image

Follow the steps given below to configure a base Image:

  1. Start up a virtual machine (VM) instance using a preferred OS, on a preferred IaaS.

  2. Install the Puppet agent.

     Install the Puppet agent on Ubuntu

    If you are using Ubuntu 12, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
    dpkg -i puppetlabs-release-precise.deb
    sudo apt-get update
    sudo apt-get install puppet

    If you are using Ubuntu 14, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    dpkg -i puppetlabs-release-trusty.deb
    sudo apt-get update
    sudo apt-get install puppet
     Install the Puppet agent on CentOS
    1. Enable dependencies and Puppet labs repository on Master.
       

      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
    2. Install and upgrade Puppet on the agent node.

      # yum install puppet
      # puppet resource package puppet ensure=latest
      # /etc/init.d/puppet restart

    For more information on installing the Puppet agent on CentOS, see installing Puppet on CentOS.

  3. Open the puppet file, which is in the <PUPPET_AGENT>/etc/default directory and configure it as follows:

    START=yes
  4. Add the following to the puppet.conf file, which is in the <PUPPET_AGENT>/etc/puppet directory:

    [main]
    server=puppet.ppaas.org

    If you are unsure of the server name, use a dummy hostname. Private PaaS will update the above with the respective server name, when it starts running.

  5. Stop the puppet instance or instances that are running.

    /etc/init.d/puppet stop
    • When the Puppet agent is installed as mentioned above, it is likely that a Puppet instance will already be running. Therefore, before creating the base image, you need to stop any Puppet instances that are running.
    • Execute the following command to identify the running puppet instances:

      ps -ef | grep puppet

      The following output will be given, if any Puppet instances are running.

      Example:

      root      1321     1  0 Sep09 ?        00:00:17 /usr/bin/ruby /usr/bin/puppet agent
      root     12149 12138  0 05:44 pts/0    00:00:00 grep --color=auto puppet
  6. Copy the init.sh script into the <PUPPET_AGENT>/root/bin directory.

    You can find the init.sh script for the respective IaaS here.

    The init.sh file differs based on the IaaS. If you wish to find the init.sh script for a different IaaS, go to init-scripts. You can find the respective init.sh script by navigating to the init-script/<IAAS>/<OS> path.

  7. Update the /etc/rc.local file.

    /root/bin/init.sh > /tmp/puppet_log
    exit 0
  8. Execute the following commands:

    rm -rf /var/lib/puppet/ssl/*
    rm -rf /tmp/*

    Executing the above commands cleans up the base image so that Private PaaS can install the required certificates and payloads. This avoids errors that would occur if Private PaaS tries to install a certificate or payload that already exists in the base image.
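
Before creating the image, a few quick checks help confirm that the instance is ready to be captured; the following sketch assumes the default /etc/puppet location and the paths used in the steps above.

    ps -ef | grep [p]uppet                 # should print nothing if the Puppet agent is stopped
    grep server /etc/puppet/puppet.conf    # confirm the [main] server entry
    ls /var/lib/puppet/ssl /tmp            # both directories should be empty after the cleanup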

Step 6 - Create a snapshot of the instance

Follow the instructions below to create a snapshot of the instance on EC2:

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  2. Make sure the appropriate Region is selected in the region selector of the navigation bar.

  3. Click Instances in the navigation pane.

  4. On the Instances page, right-click your running instance and select Create Image.
  5. Fill in a unique image name and an optional description of the image (up to 255 characters), and click Create Image.

    In Amazon EC2 instance store-backed AMIs, the image name replaces the manifest name (such as s3_bucket/something_of_your_choice.manifest.xml), which uniquely identifies each Amazon EC2 instance store-backed AMI.

    Amazon EC2 powers down the instance, takes images of any volumes that were attached, creates and registers the AMI, and then reboots the instance.

  6. Go to the AMIs page and view the AMI's status. While the new AMI is being created, its status is pending.

    It takes a few minutes for the whole process to finish.

  7. Once your new AMI's status is available, go to the Snapshots page and get the Snapshot ID of the new snapshot that was created for the new AMI that will be used in the Sample Cartridge Definition JSON file. Any instance you launch from the new AMI uses this snapshot for its root device volume. 

After you have finished creating the cartridge base image, make a note of the image ID as you will need this later when creating a cartridge.
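
If you prefer the AWS CLI, the same image can be created from the running instance; the following is a minimal sketch in which the instance ID and image name are placeholders.

    aws ec2 create-image --instance-id <instance-id> --name "ppaas-base-image" --description "PPaaS cartridge base image"
    # poll the image state until it changes from pending to available
    aws ec2 describe-images --image-ids <ami-id>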


The following sub-sections describe the steps involved in creating a cartridge base image on the OpenStack IaaS:

Step 1 - Spawn an instance 

Follow the instructions below to spawn a configured instance of Debian/Ubuntu based Linux 12.04.1 LTS distributions on OpenStack:

  1. Log in to the OpenStack management console.
  2. Click Access & Security on the menu in the left side and click Create Security Group.
  3. In the Add Rule window, enter the configurations of the rules for the security group as required and click Add. For more information on the ports that should be defined, see Required Ports.

  4. In the Create an Image window, enter the configurations for the image as required and click Create Image.
  5. In the Create Key Pair window, enter the configurations for the key pair as required and click Create Key Pair. When the message is prompted, download the key pair and keep it saved in a preferred location.
  6. Protect your key pair by executing the following command in your terminal.
    By default, your PEM file will be unprotected.  Use the following command to secure your PEM file so that others will not have access to it: 

    chmod 0600 <path to the private key>
  7. In the Details section of the Launch Instance window, enter the configurations for the instance as required.
  8. In the Access & Security section enter the configurations for the instance as required and click Create.
  9. Select the created instance in the Instances window and click Launch instance.

Step 2 - Configure the cartridge base image

Follow the steps given below to configure a base Image:

  1. Start up a virtual machine (VM) instance using a preferred OS, on a preferred IaaS.

  2. Install the Puppet agent.

     Install the Puppet agent on Ubuntu

    If you are using Ubuntu 12, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
    dpkg -i puppetlabs-release-precise.deb
    sudo apt-get update
    sudo apt-get install puppet

    If you are using Ubuntu 14, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    dpkg -i puppetlabs-release-trusty.deb
    sudo apt-get update
    sudo apt-get install puppet
     Install the Puppet agent on CentOS
    1. Enable the dependencies and the Puppet Labs repository on the node; run the rpm command that matches your Enterprise Linux (EL) version.
       

      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
    2. Install and upgrade Puppet on the agent node.

      # yum install puppet
      # puppet resource package puppet ensure=latest
      # /etc/init.d/puppet restart

    For more information on installing the Puppet agent on CentOS, see installing Puppet on CentOS.

  3. Open the puppet file, which is in the <PUPPET_AGENT>/etc/default directory and configure it as follows:

    START=yes
  4. Add the following to the puppet.conf file, which is in the <PUPPET_AGENT>/etc/puppet directory:

    [main]
    server=puppet.ppaas.org

    If you are unsure of the server name, use a dummy hostname. Private PaaS will update the above with the respective server name, when it starts running.

  5. Stop the puppet instance or instances that are running.

    /etc/init.d/puppet stop
    • When the Puppet agent is installed as mentioned in step 2, a Puppet agent instance usually starts running automatically. Therefore, before creating the base image, you need to stop any Puppet instances that are running.
    • Execute the following command to identify the running puppet instances:

      ps -ef | grep puppet

      Output similar to the following is given if any Puppet instances are running.

      Example:

      root      1321     1  0 Sep09 ?        00:00:17 /usr/bin/ruby /usr/bin/puppet agent
      root     12149 12138  0 05:44 pts/0    00:00:00 grep --color=auto puppet
  6. Copy the init.sh script into the <PUPPET_AGENT>/root/bin directory.

    You can find the init.sh script for the respective IaaS here.

    The init.sh file differs based on the IaaS. If you wish to find the init.sh script for a different IaaS, go to init-scripts. You can find the respective init.sh script by navigating to the init-script/<IAAS>/<OS> path.

  7. Update the /etc/rc.local file.

    /root/bin/init.sh > /tmp/puppet_log
    exit 0
  8. Execute the following commands:

    rm -rf /var/lib/puppet/ssl/*
    rm -rf /tmp/*

    Executing the above commands cleans up the base image so that Private PaaS can install the required certificates and payloads. This avoids the errors that occur if Private PaaS tries to install a certificate or payload that already exists in the base image. A consolidated command sketch covering steps 2 to 8 is given after this list.
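
For reference, the following is a minimal sketch of steps 2 to 8 run as the root user on an Ubuntu 14.04 base image. The /tmp/init.sh path is a hypothetical location to which the IaaS-specific init.sh script has already been downloaded; adjust the commands to suit your OS and environment:

    # Step 2: install the Puppet agent from the Puppet Labs repository (Ubuntu 14.04 shown)
    wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    dpkg -i puppetlabs-release-trusty.deb
    apt-get update && apt-get install -y puppet

    # Step 3: enable the agent (assumes the stock file contains START=no)
    sed -i 's/START=no/START=yes/' /etc/default/puppet

    # Step 4: point the agent at the Puppet master by appending the two lines shown above
    printf '[main]\nserver=puppet.ppaas.org\n' >> /etc/puppet/puppet.conf

    # Step 5: stop any running Puppet agent
    /etc/init.d/puppet stop

    # Steps 6 and 7: install the init script and run it at boot
    # (/tmp/init.sh is a hypothetical download location for the IaaS-specific init.sh)
    mkdir -p /root/bin
    cp /tmp/init.sh /root/bin/init.sh
    chmod +x /root/bin/init.sh
    printf '#!/bin/sh -e\n/root/bin/init.sh > /tmp/puppet_log\nexit 0\n' > /etc/rc.local
    chmod +x /etc/rc.local

    # Step 8: clean up certificates and temporary files before taking the snapshot
    rm -rf /var/lib/puppet/ssl/*
    rm -rf /tmp/*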

Step 3 - Create a snapshot of the instance

Follow the instructions below to create a snapshot of the instance on OpenStack:

  1. Log in to the OpenStack management console.
  2. Navigate to Instances on the menu on the left side. 
  3. Select the respective instance and click Create Snapshot.
  4. Enter a name for the image and click Create Snapshot.
  5. Navigate to Images on the menu that is on the left side and get the Image ID. You need to define the Image ID in the Sample Cartridge Definition JSON file. 

 

After you have finished creating the cartridge, make a note of the image ID you created for the cartridge, as you will need this when you use Stratos Manager to add a cartridge.

 Click here for instructions...

The following sub-sections describe the steps involved in creating a cartridge base image on the GCE IaaS:

Step 1 - Spawn an instance

  1. Navigate to the Google Developers Console.
  2. Launch an instance with your preferred OS and other related settings as follows:
    1. On the Compute menu, click Compute Engine and then click VM instances.
    2. Click Create instance.

      The Create a new instance page appears.
    3. After entering the required instance details, click Save to create the instance.
  3. SSH to the spawned instance and make relevant changes to the base image (e.g., If you need a PHP cartridge, install PHP related libraries).

    1. On the Compute menu, click Compute Engine and then click VM Instances.
    2. Click the more option in the Connect column.
    3. Click Open in browser window.
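
Alternatively, if you have the gcloud CLI installed and authenticated, you can SSH to the instance from your own terminal. The instance name and zone below are placeholders:

    # SSH into the spawned instance (instance name and zone are placeholders)
    gcloud compute ssh ppaas-base-instance --zone us-central1-a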

     

Step 2 - Configure the cartridge base image

Follow the steps given below to configure a base Image:

  1. Start up a virtual machine (VM) instance using a preferred OS, on a preferred IaaS.

  2. Install the Puppet agent.

     Install the Puppet agent on Ubuntu

    If you are using Ubuntu 12, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
    dpkg -i puppetlabs-release-precise.deb
    sudo apt-get update
    sudo apt-get install puppet

    If you are using Ubuntu 14, you will require the following Puppet repository to install the Puppet agent.

    wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    dpkg -i puppetlabs-release-trusty.deb
    sudo apt-get update
    sudo apt-get install puppet
     Install the Puppet agent on CentOS
    1. Enable the dependencies and the Puppet Labs repository on the node; run the rpm command that matches your Enterprise Linux (EL) version.
       

      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
      # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
    2. Install and upgrade Puppet on the agent node.

      # yum install puppet
      # puppet resource package puppet ensure=latest
      # /etc/init.d/puppet restart

    For more information on installing the Puppet agent on CentOS, see installing Puppet on CentOS.

  3. Open the puppet file, which is in the <PUPPET_AGENT>/etc/default directory and configure it as follows:

    START=yes
  4. Add the following to the puppet.conf file, which is in the <PUPPET_AGENT>/etc/puppet directory:

    [main]
    server=puppet.ppaas.org

    If you are unsure of the server name, use a dummy hostname. Private PaaS will update the above with the respective server name, when it starts running.

  5. Stop the puppet instance or instances that are running.

    /etc/init.d/puppet stop
    • When the Puppet agent is installed as mentioned in step 2, a Puppet agent instance usually starts running automatically. Therefore, before creating the base image, you need to stop any Puppet instances that are running.
    • Execute the following command to identify the running puppet instances:

      ps -ef | grep puppet

      Output similar to the following is given if any Puppet instances are running.

      Example:

      root      1321     1  0 Sep09 ?        00:00:17 /usr/bin/ruby /usr/bin/puppet agent
      root     12149 12138  0 05:44 pts/0    00:00:00 grep --color=auto puppet
  6. Copy the init.sh script into the <PUPPET_AGENT>/root/bin directory.

    You can find the init.sh script for the respective IaaS here.

    The init.sh file differs based on the IaaS. If you wish to find the init.sh script for a different IaaS, go to init-scripts. You can find the respective init.sh script by navigating to the init-script/<IAAS>/<OS> path.

  7. Update the /etc/rc.local file.

    /root/bin/init.sh > /tmp/puppet_log
    exit 0
  8. Execute the following commands:

    rm -rf /var/lib/puppet/ssl/*
    rm -rf /tmp/*

    Executing the above commands cleans up the base image so that Private PaaS can install the required certificates and payloads. This avoids the errors that occur if Private PaaS tries to install a certificate or payload that already exists in the base image.

Step 3 - Create a snapshot of the instance

  1. Set the auto-delete state of the root persistent disk to false as follows:
    This prevents the persistent disk from being deleted automatically when you terminate the instance.

    1. On the Compute menu, click Compute Engine and then click VM Instances.
    2. Click on the name of the instance.

    3. Edit the settings related to the instance. 

    4. Uncheck the Delete boot disk when instance is deleted option. This ensures that the data on the boot disk is not deleted when you terminate the instance.

    5. Click Save.
      If you wish to view details on the disk related to the instance, click Compute Engine and then click Disks.  

  2. Delete the instance.
    You need to terminate the spawned instance before you can create an image from its root persistent disk. When terminating the instance, make sure that the persistent disk is not attached to any other virtual machine.

    1. On the Compute menu, click Compute Engine and then click VM Instances.
    2. Check the instance that you need to delete.
    3. Click Delete to delete the instance.
  3. Create a new image as follows:

    1. On the Compute menu, click Compute Engine and then click Images.
    2. Click Create Image.
    3. Set the Source type to Disk and select the relevant disk name from the drop-down menu, so that the image is created from the persistent disk.
    4. Click Create.

      The newly created image is immediately available under the Images section.
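
If you prefer to script this step, the gcloud CLI offers roughly equivalent commands. The following is a sketch only; the instance, disk, image, and zone names are placeholders, and the boot disk typically has the same name as the instance (adjust if yours differs):

    # Delete the instance but keep its boot disk (equivalent to unchecking the auto-delete option)
    gcloud compute instances delete ppaas-base-instance --zone us-central1-a --keep-disks boot

    # Create an image from the retained root persistent disk
    gcloud compute images create ppaas-base-image --source-disk ppaas-base-instance --source-disk-zone us-central1-a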
       

 

Step 5 - Disable the mock IaaS

Mock IaaS is enabled by default. Therefore, if you are running PPaaS on another IaaS, you need to disable the Mock IaaS.

Follow the instructions below to disable the Mock IaaS:

  1. Navigate to the <PRIVATE_PAAS_HOME>/repository/conf/mock-iaas.xml file and disable the Mock IaaS.

    <mock-iaas enabled="false">
  2. Navigate to the <PRIVATE_PAAS_HOME>/repository/deployment/server/webapps directory and delete the mock-iaas.war file. 

    When Private PaaS runs, the mock-iaas.war file is extracted and a mock-iaas folder is created. Therefore, if you have run PPaaS previously, delete the mock-iaas folder as well.
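
As a quick reference, the changes above can be scripted roughly as follows. This is a sketch only; replace <PRIVATE_PAAS_HOME> with the actual path, and note that the sed expression assumes the attribute currently appears as enabled="true" in mock-iaas.xml:

    # Disable the mock IaaS in mock-iaas.xml (assumes the file contains enabled="true")
    sed -i 's/<mock-iaas enabled="true">/<mock-iaas enabled="false">/' <PRIVATE_PAAS_HOME>/repository/conf/mock-iaas.xml

    # Remove the mock IaaS web application and, if PPaaS has already been run, its extracted folder
    rm -f  <PRIVATE_PAAS_HOME>/repository/deployment/server/webapps/mock-iaas.war
    rm -rf <PRIVATE_PAAS_HOME>/repository/deployment/server/webapps/mock-iaas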

 

Step 6 - Carry out additional IaaS configurations (Optional)

This step is only applicable if you are using GCE.

When working on GCE, carry out the following instructions:

  1. Create a service group.
  2. Add a firewall rule.
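
For example, a firewall rule that opens a set of ports can be added with the gcloud CLI roughly as follows. This is a sketch only; the rule name, network, and port list are placeholders, so substitute the ports listed under Required Ports:

    # Open the required ports on the default network (rule name, network, and ports are placeholders)
    gcloud compute firewall-rules create ppaas-allow-required-ports \
      --network default --allow tcp:80,tcp:443,tcp:9443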

Step 7 - Configure the Cloud Controller (Optional)

This step is only mandatory if you are deploying PPaaS on a Virtual Machine (e.g., EC2, OpenStack, GCE).

 Click here for instructions...

Follow the instructions given below to configure the Cloud Controller (CC):

  1. Configure the IaaS provider details based on the IaaS.
    You need to configure details in the  <PRIVATE_PAAS_HOME>/repository/conf/cloud-controller.xml file and comment out the IaaS provider details that are not being used.  

    Update the identity, credential, owner-id, availabilityZone, securityGroups, and keyPair values of the Amazon EC2 IaaS provider.

    <cloudController>
    ...
    		<iaasProvider type="ec2" name="Amazon EC2">
                <className>org.apache.stratos.cloud.controller.iaases.ec2.EC2Iaas</className>
                <provider>aws-ec2</provider>
                <identity svns:secretAlias="cloud.controller.ec2.identity">identity</identity>
                <credential svns:secretAlias="cloud.controller.ec2.credential">credential</credential>
                <property name="jclouds.ec2.ami-query" value="owner-id=owner-id;state=available;image-type=machine"/>
                <property name="availabilityZone" value="ap-southeast-1b"/>
                <property name="securityGroups" value="security-group"/>
                <property name="autoAssignIp" value="true" />
                <property name="keyPair" value="keypair-name"/>
            </iaasProvider>
    ...
    </cloudController>

    Update the following properties related to OpenStack: identity, credential, and the OpenStack API URL (jclouds.endpoint).

    <cloudController>
       ...
       <iaasProvider type="openstack" name="Openstack">
          <className>org.apache.stratos.cloud.controller.iaases.openstack.OpenstackIaas</className>
          <provider>openstack-nova</provider>
          <identity svns:secretAlias="cloud.controller.openstack.identity">demo:demo</identity>
          <credential svns:secretAlias="cloud.controller.openstack.credential">openstack</credential>
          <property name="jclouds.endpoint" value="http://192.168.16.20:5000/" />
          <property name="jclouds.openstack-nova.auto-create-floating-ips" value="false"/>
          <property name="jclouds.api-version" value="2.0/" />
          <property name="openstack.networking.provider" value="nova" />
          <property name="X" value="x" />
          <property name="Y" value="y" />
       </iaasProvider>
       ...
    </cloudController>
    • The OPENSTACK_IDENTITY must be given in the following format: OPENSTACK_PROJECT_NAME:USERNAME
      Example: demo:kim
    • The password of the given user USERNAME will serve as the OPENSTACK_CREDENTIAL.
      Example: The password of user Kim is kim123$

    Add the code snippet below to the cloud-controller.xml file and update the following GCE-related properties: cloud.controller.gce.identity, cloud.controller.gce.credential, and projectName.

    <iaasProvider type="gce" name="Google Compute Engine">
                <className>org.apache.stratos.cloud.controller.iaases.gce.GCEIaas</className>
                <provider>google-compute-engine</provider>
                <identity svns:secretAlias="cloud.controller.gce.identity">34324435543-p41nln3fdsfdsfdsfds7n4jn3m421girtffpeep@developer.gserviceaccount.com</identity>
                <credential svns:secretAlias="cloud.controller.gce.credential">
    -----BEGIN PRIVATE KEY-----
    xxxxxxxxxxxxxxxxx
    -----END PRIVATE KEY-----
    </credential>            
                <property name="autoAssignIp" value="true" />
                <property name="projectName" value="ultra-component-108507"/>
            </iaasProvider>

    The following are the definitions of some of the parameters in the above code snippet: 

    • cloud.controller.gce.identity = Client email. You can find the client email in the JSON that gets downloaded when creating a GCE service account.

    • cloud.controller.gce.credential = Private key. You need to create a service account to obtain the private key. For more information, see Create a GCE service account.

      Remove the newline (\n) characters from the private key and place the key between the starting (-----BEGIN PRIVATE KEY-----) and ending (-----END PRIVATE KEY-----) placeholders.

    • projectName = Project ID. This refers to the ID of the project, which you used to create the service key in order to obtain the private key.

  2. Update the values of the MB_IP and MB_PORT in the jndi.properties file, which is in the <PRIVATE_PAAS_HOME>/repository/conf directory. 

    The default value of the message broker port (MB_PORT) is 61616.

    The values are as follows:

    • MB_IP: The IP address used by ActiveMQ.

    • MB_PORT: The port used by ActiveMQ.
    connectionfactoryName=TopicConnectionFactory
    java.naming.provider.url=tcp://[MB_IP]:[MB_PORT]
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
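
    For example, if ActiveMQ runs on 10.0.0.5 (a placeholder address) with the default port, the entries would look as follows:

    connectionfactoryName=TopicConnectionFactory
    java.naming.provider.url=tcp://10.0.0.5:61616
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory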

 

Step 8 - Define the Message Broker IP (Optional)

This step is only mandatory if you have set up the Message Broker (MB), in this case ActiveMQ, on a separate host.

If you have set up ActiveMQ, which is the PPaaS Message Broker, on a separate host, you need to define the Message Broker IP so that the MB can communicate with PPaaS.

 Click here for instructions...

Update the value of MB_IP in the JMSOutputAdaptor file, which is in the <PRIVATE_PAAS_HOME>/repository/deployment/server/outputeventadaptors directory.

 

[MB_IP]: The IP address used by ActiveMQ.
<property name="java.naming.provider.url">tcp://[MB_IP]:61616</property>
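
For example, with ActiveMQ on 10.0.0.5 (a placeholder address) and the default port, the property would read:

<property name="java.naming.provider.url">tcp://10.0.0.5:61616</property>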

 

Step 9 - Start the PPaaS server

The way in which you need to start the PPaaS server varies based on your settings as follows:

We recommend starting the PPaaS server in background mode (using the start option shown below), so that the server does not stop when you close the terminal session.

  • If you want to use the internal database (H2) and the embedded CEP, start the PPaaS server as follows:

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start
  • If you want to use an external database, start the Private PaaS server with the -Dsetup option as follows: 
    This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup
  • If you want to use an external CEP, disable the embedded CEP when starting the PPaaS server as follows:

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dprofile=cep-excluded
  • If you want to use an external database, together with an external CEP, start the Private PaaS server as follows:
    This creates the database schemas using the scripts in the <PRIVATE_PAAS_HOME>/dbscripts directory.

    sh <PRIVATE_PAAS_HOME>/bin/wso2server.sh start -Dsetup -Dprofile=cep-excluded 

     

You can tail the log to verify that the Private PaaS server starts without any issues.

tail -f <PRIVATE_PAAS_HOME>/repository/logs/wso2carbon.log
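
WSO2 Carbon-based servers typically log the management console URL once startup completes, so in addition to tailing the log you can grep for that line (the exact wording may vary between versions):

grep "Mgt Console URL" <PRIVATE_PAAS_HOME>/repository/logs/wso2carbon.log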

What's next?

After starting PPaaS on a preferred IaaS, configure the WSO2 cartridge, so that you can seamlessly deploy the WSO2 product on PPaaS.

 

 
