Thursday, December 05, 2013

Unbricking DD/OpenWRT routers

The recent news about a new Linux worm that attacks routers made me download and flash the latest version of dd-wrt, hoping it would have newer versions of the binaries and thus ensure better protection.

Unfortunately, after the update and the restart that followed, the router was rendered useless. Obviously the downloaded firmware was damaged. I had a brick where only a moment ago my home router - the ageing Linksys WRT160NL - used to be.

I had to search some sites on my mobile only to find out that in order to unbrick it I could either:
  • do a factory reset
  • use JTAG and a serial cable (or on newer machines a USB-to-serial cable)
The factory reset did not work. Although the router happily flashed its LAN, WAN and power lights, it did not establish a connection with my Windows 7 machine, nor did it broadcast its wireless SSID. 

So I started investigating the other alternative - uploading new firmware via the serial communication header in the router. The sites mentioned TFTP and then it hit me. I had managed to flash a Buffalo router a while ago just by using the built-in boot-loader and a TFTP PUT request. It should be possible to do the same here, since the router's lights were functioning and therefore at least part of the boot-loader was working.

I asked Google and found out in the OpenWRT wiki that this should be fairly easy to do. The wiki commanded:
1. Turn off the power to the router and leave it off until the final step.
2. Make sure your computer has a static IP address from 192.168.1.x (e.g. 192.168.1.4).
3. Make sure the ethernet cable is plugged into one of the router's LAN ports and the other end into the computer's ethernet port.
4. cd to the folder where you have the image.
5. Change the name of the new firmware to code.bin, then type:
6. echo -e "binary\nrexmt 1\ntimeout 60\ntrace\nput code.bin\n" | tftp 192.168.1.1
7. Plug the power into the router; it should flash.
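Since I was on Windows, step 2 meant setting the static address from an elevated command prompt. A minimal sketch, assuming the adapter is named "Local Area Connection" (check yours with netsh interface show interface):

netsh interface ip set address name="Local Area Connection" static 192.168.1.4 255.255.255.0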
Well, needless to say this didn't work - I was on Windows. I had the Microsoft TFTP client, which established the connection instantly and never looked back to retry. 

Fortunately I had Cygwin installed as well. So I just had to download and install the tftp package. Without a router and therefore without an internet connection.

I had found the OpenWRT wiki on my iPad using internet access via WiFi tethering. So this time I enabled USB tethering and used it to update Cygwin and add a proper TFTP client to my Windows system. I also downloaded an older version of the firmware.
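For the record, the Cygwin installer can add packages unattended from the command line. A sketch, assuming the client lives in a package named tftp (verify the exact package name in the setup UI):

setup-x86.exe -q -P tftp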

It took me a matter of minutes to try the steps above and to restore my router's firmware.

I even flashed the (hopefully) latest and greatest version of the dd-wrt firmware for 160NL.
Re-downloaded of course.

Thursday, October 31, 2013

SAP HANA Cloud: Application properties. Multiple connections

The latest update of SAP HANA Cloud Platform (we've changed the name again) includes two small but important changes. Copied from the release notes:
  • Deployment with multiple connections:
"You can use the --connections parameter in the deploy command  to specify the number of connections to be used during deployment. Use the parameter to speed up the deployment of application archives bigger than 5 MB in slow networks."
  • Application properties listing
"You can now list the properties of a deployed application using the new command [...] display-application-properties."
Multiple connections

I already provided measurements on the impact of using more than one connection in slow or traffic/connection shaped networks in one of my previous blogs. You can now try this for yourself by using the --connections parameter during deployment.

For the number of connections you can use:
  • --connections 1 - disables the feature
  • up to --connections 5 - the maximum allowed number of connections
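For example, to deploy a big archive over three connections (the properties file and the archive it points to are illustrative):

neo deploy samples\deploy_war\example_war.properties --connections 3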

Although some networks allow a higher number of connections than 5, this rarely pays off - unless you are the only person left in the office on a Friday evening with the sole purpose of utilizing the whole bandwidth of the company's leased internet line.

Splitting an archive into small chunks has a performance penalty of 5-10% over simply transferring it over the line. We wanted to guarantee the split was really needed, so we added the 5 MiB entry barrier for the new feature.

To determine the right number of network connections you can:
  1. Start with the default settings
  2. Increase the number from 2 up to 5, looking at the deployment times:
    [Thu Oct 31 10:22:53 FET 2013] Deployment started....
    [Thu Oct 31 10:22:58 FET 2013] Deployment finished successfully 
  3. Use the number of connections that provided the fastest deploy time
Application properties listing

This new feature is small, but quite important. Without this neat feature the operators and developers had to write down what they actually used as deploy parameters. There was no way to obtain the settings from the cloud.

Now we have finally provided a way to get this info, so you can blame either yourself or a colleague for using a certain setting:

C:\sdk-1.39.13.6\tools>neo display-application-properties samples\deploy_war\trial.properties

SAP HANA Cloud Console Client

Requesting application properties for:
   application: test
   account    : i024099trial
   SDK version: 1.39.13.6
   user       : i024099

Password for your user:
[Thu Oct 31 10:35:24 FET 2013] Requesting application properties...
[Thu Oct 31 10:35:25 FET 2013] Request for application properties finished successfully

   runtime-version   : 1
   minimum-processes : 1
   maximum-processes : 1
   java-version      : 6

Wednesday, September 18, 2013

SAP HANA Cloud: Operator's Guide

The new Operator's Guide is available in the official documentation. The guide targets operators who usually:
  • don't have access to the application source
  • have to update to the latest version
  • maintain the stability and performance of the application

The new Operator's Guide has the following sections at present:

Console Client Reference


Contains information about the use of the console client and will gather all console commands that the SAP HANA Cloud SDK provides.

The new thing here is that we finally managed to provide a common convention for the exit codes, which can be relied on when using the commands to script operations.

Although the commands can still have their own exit codes, they are now inside predefined ranges. This means that you may choose to care about a specific exit code, but generally you can just rely on the range for most of the operations.

For example, handling the exit code range 40 to 109 guarantees that you've covered all parameter validation errors. The operation may fail because the archive you are trying to deploy does not exist, but you don't need to care about the exact exit code since you are sure it is in the range above.
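A sketch of how a Windows batch script might react to that range (the 40-109 range comes from the convention above; IF ERRORLEVEL n is standard cmd syntax and is true when the exit code is n or higher):

neo deploy samples\deploy_war\example_war.properties
IF ERRORLEVEL 40 IF NOT ERRORLEVEL 110 ECHO Parameter validation error - check the command arguments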
Application Configuration

Documents different aspects of the application configuration that can help operators achieve improved stability and performance. The covered topics include (for now) how to:
  • update your runtime version
  • enable GZip response compression
  • configure JVM arguments
  • use Java 6 or Java 7
  • scale your application horizontally
  • assign roles
  • configure destinations

The section describes most of the parameters present in the deploy command and gives recommendations and explanations for all the features. Of course it contains security and connectivity information as well (the last topics).

We'll take care to expand it with everything you can configure, so keep an eye out for more information. 


Updating Applications

This section contains information about the features needed to ensure that the update of your application is as smooth as possible:
  • Update with Zero Downtime
  • Update with downtime
  • Soft shut-down 

I've already described most of these features in my previous blog, but since we have more to share, I intend to write a follow-up to complete the picture with the support for a custom Maintenance Page.

As you can see we have laid the foundation for the new Operator's Guide, but it's by no means complete. Check the release notes, or have a look at the content every two weeks for new topics.

Monday, August 12, 2013

SAP HANA Cloud: gzip Compression

An easy way to save bandwidth is to use gzip compression and send less data to the client's browser.

Googling gzip provides some good explanations of the compression. The rule of thumb is to turn on compression for text-based content (scripts, HTML, JSON or XML), since these benefit from the heavily reduced bandwidth. Images and audio-visual formats usually require no additional compression.

Following these recommendations, up to now SAP HANA Cloud automatically compressed text-based responses (MIME types text/html, text/xml, text/plain). 

The problem was that you could neither say you don't want this to happen, nor add another MIME type to the list of compressed responses. Setting a compression threshold was also not supported.

To lift this limitation we introduced the ability to specify:
  • whether you require compression or not
  • what MIME types you want to compress
  • the threshold that turns on the compression

We added 3 new parameters to the deploy command that you can specify when deploying your application:
  • --compressible-mime-type - a comma-separated list of MIME types for which compression will be used. Default: 'text/html, text/xml, text/plain'
  • --compression - enables or disables gzip response compression. Acceptable values: 'on', 'off', 'force' or an integer
  • --compression-min-size - responses bigger than this value get compressed, smaller ones are not. Default: 2048 bytes

To deploy your application with gzip compression of JavaScript you can issue: 
neo deploy myapp.properties --compression on --compressible-mime-type application/javascript
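To also raise the threshold so that only larger responses are compressed, combine the parameters (the 10240 bytes value is just an illustration):

neo deploy myapp.properties --compression on --compressible-mime-type application/javascript --compression-min-size 10240

A quick way to verify the effect is to request a resource once with and once without an Accept-Encoding header and compare the transferred sizes, e.g. with curl (the URL is hypothetical):

curl -s -H "Accept-Encoding: gzip" -o /dev/null -w "%{size_download}\n" https://myapp.hanatrial.ondemand.com/script.js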

Behind the scenes we are using the Tomcat gzip compression described in the Apache Tomcat Configuration Reference.

Wednesday, July 24, 2013

SAP HANA Cloud: Updating productive applications

So far the update of productive applications was entirely in the hands of the developers. Not necessarily a bad thing, but it required lots of boilerplate code that every application had to embed.

The new bi-weekly update of the HANA Cloud introduces a small feature that nevertheless enables customers to update their applications with reduced downtime or with no downtime at all. 

On HANA Cloud you can have one or more instances of your application and each of these instances is called an application process. Previously the allowed actions were:
  • start of a single application process
  • stop of all application processes at once

This however implies that you can increase the number of worker application processes (scale up), but you cannot scale down.
 
What's new is that you can finally stop a specific application process running on the HANA Cloud's compute units by specifying its process ID. If you wonder what the heck I mean, here are some glossary-style explanations:
  • compute unit - think of it as a hardware box; this is what you pay for to get more CPUs and memory
  • application process - the software that runs on top of the hardware - basically the SAP server that in turn hosts your own application code
  • process ID - the unique ID associated with every application process. Used as the application process name in commands (as the term suggests).

So with this minimal change from the user's perspective you can now achieve:
  • scaling up & down
  • ageing
  • rolling update / zero downtime

Let's see what these three things mean...

Scaling your application
 

HANA Cloud provided the ability to scale your application from the very beginning of its existence. 

As I said, customers were allowed to start new processes but didn't have the ability to stop a single one. Now this is fixed and you can simply include the new --application-process-id parameter in the stop command:
neo stop myapplication.properties --application-process-id <id>
The list of process IDs is displayed after you issue the start command, or you can use the status command to have the list printed. In both cases you'll get something similar to the output below:

Application processes
  ID                                           Status
  a182761d75b18b6fe17ed4285089d6447ae4ab3c     STARTED
  385b2cacd896c45dd39c8f444774329869282b80     PENDING
The next step would be to copy the ID and use it in the stop command like this:
neo stop myapplication.properties --application-process-id a182761d75b18b6fe17ed4285089d6447ae4ab3c
The above command will stop the first process, leaving you only one application process to handle the incoming user requests.

Ageing

Ageing is a way to deal with applications that have issues with resource consumption. They either get too slow or consume too much memory. 

This may be due to badly written code, the use of a 3rd party library that leaks, or whatever other reason you may think of. You may recognize this approach from home routers or other home appliances that have poorly written firmware, suffer from bad hardware design, or most often both :)

In HANA Cloud, thanks to the process ID, you can stop the unhealthy application processes and start fresh new ones to replace them.
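In practice ageing boils down to a start followed by a targeted stop - the ID below is illustrative, take the real one from the status output:

neo start myapplication.properties --synchronous
neo stop myapplication.properties --application-process-id a182761d75b18b6fe17ed4285089d6447ae4ab3c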

Rolling update or Zero Downtime

The most interesting application of the new process ID is to update your application. 

In general you can update your application in three ways:
  • without your customers noticing anything (zero downtime)
  • before your customers notice anything (rolling update)
  • after your customers have seen a warning (maintenance page)

The maintenance page approach means adding a banner, window or in general something flashy to get the customers' attention and inform them that from day/hour 1 to day/hour 2 they will not be able to access the application. This however is quite disruptive, since you'll be out of business while updating and your customers have to be informed and to (eagerly?!?) expect this.

In most cases customers are quite unhappy with the notice/maintenance approach, so you'll want to do the update with one of the other two approaches. Both require that old versions of your application can work together with new versions of the same code and data. If this is not the case, you either have to stick to the maintenance page or redesign your application.

If both old and new versions of your application can work together, you may decide to stop/disable the new functionality until all processes are updated. This may be needed to avoid backward-incompatible data reaching the database or being sent via some channel. 

This means that customers may still use the application as they used to, but some may notice the new, still disabled, functionality until you finish rolling out the new version.

If there are only minor changes (or your application can cope with the changes) you may decide to simply replace all nodes one by one and have a real zero downtime update.

Should I stop or should I start?

The rolling update and zero downtime approaches require that a new process is started before an old one is stopped. This in general preserves your application's capacity to process a certain amount of requests. Stopping before starting would effectively scale down your application, so I would recommend starting before stopping.
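A rolling update is then just that sequence repeated per process, assuming (as the zero downtime idea implies) that already running processes keep serving the old binaries after a new deploy. The file name and the ID placeholder are illustrative:

neo deploy myapplication.properties
neo start myapplication.properties --synchronous
neo stop myapplication.properties --application-process-id <id of a process running the old version>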

Of course using the maintenance page approach will in most cases require you to stop the whole application without using process IDs at all.

Killing Me Softly

Before you can stop an application process you'll want to stop all incoming requests to it. We have the disable command in the pipeline to help you do this.
 
The problem most operators would face is how to know when they can stop the application or the process without affecting user sessions or data. 

To check the active sessions, you need to configure JMX checks for your application by executing the following command:
neo create-jmx-check --account <your account> --application <application name> --user <e-mail or user> --name "ActiveSessions" --object-name <object name of the MBean you want to call> --attribute activeSessions --host <SAP HANA Cloud host>
This check allows you to view the number of active HTTP sessions per application (per Web context, the context is part of the object name). 

An example invocation that checks for context path /demo would look like:
neo create-jmx-check -a myaccount -b demo -u s1234567 -n "ActiveSessions" -O "Catalina:type=Manager,context=/demo,host=localhost" -A activeSessions --host neo.ondemand.com
Currently the HANA Cloud support for a custom maintenance page and the disable command are non-existent, but we are working on this.

Wednesday, July 03, 2013

SAP HANA Cloud: Multiple connections deployment

Recently we found out that some networks use traffic shaping for connections to SAP HANA Cloud. Shaping is not really a surprise, but what astonished us was that the speed was limited to 700 KiB per second, making deployment of large archives a problem.

For example, we had a case where a 140 MiB archive took 1 hour and 30 minutes to upload. This brought back the times when I downloaded Apple //e disks from a BBS via a 300 bps modem for 5 hours.

To solve the issue we came up with the idea to use multiple connections and work around the shaping. This required changes in both the client (NEO CLI in the SAP HANA Cloud SDK) and the server. 

Once we had the implementation completed we had the following data from our tests:

Slow network


The approach we used reduced the deploy time from ~30 minutes to ~3 minutes. As we can see, the network in Vancouver can handle up to 8 connections; increasing the number of connections further does not make sense, since the upload time starts growing again.

Average network

In Palo Alto we managed to improve the deployment time from ~7 minutes to ~1 minute. This network allows for a great number of connections and the maximum transfer rates were reached with 30 connections.

Fast network


The network in Bulgaria allows for up to 3 connections. Even in this network we can see that the transfer rate increases with the number of connections.

Possible problems

Some networks will terminate the connections if a limit is reached, or just stall the transfer until the number of connections falls under some threshold. Currently this will break the deployment.

When / How can I try this?

We will use 2 connections by default, but you will be able to use the --connections parameter when deploying to:
  • revert to the old behaviour by specifying 1 connection
  • stick with the default or increase up to the maximum allowed 6 connections
Please keep in mind that we will revert to one connection if your deploy archive is under 5 MiB.

We expect this new feature to appear with the next update of SAP HANA Cloud SDK. To check if it is there just try the --connections parameter :)

Monday, July 01, 2013

SAP HANA Cloud: Automating your deployments

Why command line?

Because you can automate the cloud operations using shell scripts or continuous integration servers (Jenkins/Hudson for instance).

Additionally, the command line interface (CLI) allows much faster development cycles and lets us accumulate customer feedback faster than some visual tool would. This is partially due to the lack of the complicated UI design required to get a "native" IDE look and feel.

Application context root

You may have noticed that I used "ROOT.war" as the archive name in my first blog. The reason for this was that I wanted to use the "/" context root and access the application without a context path.

The name of the application archive determines the context path, as mentioned in the official documentation.

Scaling your application

In SAP HANA Cloud you can scale your application:
  • vertically - adding more resources to a single application node
  • horizontally - adding more application nodes

To scale your application vertically you just need to specify the compute unit size you'd like to use. HANA Cloud offers several compute unit sizes that increase in both CPU and memory.

Horizontal scaling is possible as well by deploying with two additional parameters:
  • minimum number of nodes that can handle the usual load of requests to your application
  • maximum number of nodes that you can afford to pay for :)

Horizontal scaling can be done, for instance, with the following deploy parameters:
neo deploy mytemplate.properties --minimum-processes 2 --maximum-processes 4
The above command line means that I want to have at least 2 application processes and my application can handle unlimited load with 4 processes (or most probably that I can afford to pay for just 4 compute units).

Please note that the trial landscape supports neither vertical nor horizontal scaling, as mentioned in the account types description.

Start / Stop / Restart

The start command actually provides you with an application process created from the deployed binaries. You can repeat the start as many times as the number of your compute units. In other words - if you have bought 4 units, you cannot start 5 application processes.

The default behaviour is that the start is asynchronous. This allows you to quickly start as many processes as you want without waiting for them to finish.

However if you want to be sure that the processes are started you can:
  • poll for their status (status command)
  • use the --synchronous flag
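For example, a synchronous start that returns only after the process is up (the properties file name is illustrative):

neo start samples\deploy_war\example_war.properties --synchronous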

To stop the whole application use the stop command. At present you cannot stop a single application process but this is in the pipeline.

The restart command is a convenience shortcut and actually does the same as stop with the --synchronous flag followed by start. The start can be synchronous or not (depending on the parameters you used).

Managing multiple applications

You probably already know the properties file you can use with all NEO CLI commands. As you have noticed, this properties file serves two purposes:
  • a placeholder for your settings
  • automation helper

What you probably missed is that the file can be used as well to manage:
  • multiple applications
  • multiple command parameters

Let's assume that your file contains:
application = test
If you use neo start with this file, you'll start one application process of the test application. However, what if you used:
neo start myfile.properties --application demo
The result of the above command is that the demo application is started. To allow this, the CLI parameters take precedence over the properties file values.

The properties file can also hold parameters that are not needed by the currently executed command, but are meant to be used for subsequent commands. 

In this way we can use the same file for most of our needs and amend the parameters on the command line only in some rare cases.

Undeploy

The undeploy command does what its name says (I hope) - it simply deletes the binaries you uploaded with the deploy command.

To successfully execute the command, however, you first have to stop your application. This is a way to protect not only ourselves from requests such as "I deleted my application and now I cannot start it any more", but our customers as well from making such mistakes.
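A sketch of the full sequence, assuming the command is simply called undeploy as the section title suggests (the properties file is illustrative):

neo stop samples\deploy_war\example_war.properties --synchronous
neo undeploy samples\deploy_war\example_war.properties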

Passwords

The password required for all of the above commands:
  • cannot be stored in the template properties file
  • can be passed as a CLI parameter
We currently accept only plain-text passwords, so we don't want to force users to store them in the properties file. That is the shortest road to accidentally emailing the password to someone.

At the same time, to enable automation, we accept the --password parameter. Since this is a CLI parameter, it has an important implication - the password requires some pre-processing if it contains special characters, depending on the shell or command interpreter used.  

For example:
  • the ! character has to be escaped as ^! on Windows
  • if you have a space in the password (pass word) you have to quote it ("pass word")
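Putting it together, a couple of illustrative Windows command lines (the passwords are made up):

neo deploy samples\deploy_war\example_war.properties --password "pass word"
neo deploy samples\deploy_war\example_war.properties --password secret^!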

We redirect users to "your console/shell user guide on how to use special characters as command line arguments", but on Windows this seems to be a nightmare, since there are almost no official resources, other than for the retired Windows XP.

Proxy

The proxy settings can also be a pain, so my advice here is to use the console to set the proxy. Setting the proxy globally for the system using a fancy UI almost certainly means that the console has been left out of scope by the developers of that UI :)

You can check our minimal effort to explain how to use a proxy with the NEO CLI. Keep in mind that continuous integration servers, VCS tools or other executables may also require the proxy to be set for the user you are running them with.
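As a sketch only - the exact variable names are in the documentation mentioned above; I'm assuming here the HTTPS_PROXY_HOST/HTTPS_PROXY_PORT pair for a Windows console session:

set HTTPS_PROXY_HOST=proxy.example.com
set HTTPS_PROXY_PORT=8080
neo deploy samples\deploy_war\example_war.properties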

Sunday, June 30, 2013

Running existing applications on SAP HANA Cloud



In my recent blog I mentioned that I gave up on running my own server and am now entirely dependent on services provided via the Internet, in some cloud. 

Since I'm working for SAP, developing the HANA Cloud thingie, I intend to start a series of blogs about my experience with the clouds and in particular the PaaS I'm working on. Not really a complete surprise I guess, so to catch up I'll try to post stuff that's nowhere else to be found :) 

Having said all the above, I want to make the statement that I'm expressing just my own opinion. You have probably noticed that this is not an official SAP site; I'm not an SAP spokesman, nor is my blog the official SAP documentation.

So here comes my first blog for SAP HANA Cloud ...


Isn't this already available in the documentation?

Well not exactly ...

Developing and deploying an application on SAP HANA Cloud is described in numerous blogs, articles and tutorials. But most of the stuff out there ignores the fact that there are cloud users who want to run an existing application on the platform. 

Some of them need to check whether their application, written for standalone servers, will require modification (or whether it will run at all), and some just want to see how all this cloud stuff behaves without going through all the development hassle and pain. 

And what is this blog about then?

It turns out that if you want to step into the shoes of an operator or administrator (or whatever it is called in your organization), you're out of luck. There are bits and pieces scattered in various places, so it becomes quite difficult to get a decent overview.

While my target is not to gather such an overview and provide it all at once within the next year or so, I do want to assemble some guidelines on how you can use HANA Cloud and where you can get the operator's information.

Get an account

To be able to use SAP HANA Cloud for free you'll need to create a developer account. It gives you access to the trial landscape for an unlimited period. 
The access to the trial landscape allows you to:
  • deploy and run your own application
  • get access to the Cloud Cockpit to do some operator's tasks

Download and configure the SDK

Go to the tools page and download the latest SDK. Extract the downloaded ZIP somewhere and then check the official documentation on how the console client can be configured to use proxy.

To try out whether everything works OK, go to the directory where you extracted the SDK and start a console/terminal/command line.

Now use:
cd tools
neo
You should see a list with commands available with the SDK:


C:\neo-sdk-javaweb-1.30.8\tools>neo

SAP HANA Cloud Console Client

No arguments specified


Usage: neo [group:]command [parameters] [properties file]

Available commands:

 --- connectivity ---
  delete-destination
        Deletes destination or JKS file from SAP HANA Cloud
  get-destination
        Downloads destination or JKS file to local file system
  put-destination
        Uploads destination or JKS file to SAP HANA Cloud

 --- deploy ---
  deploy
        Deploys WARs or installable units
... <output trimmed> ...

Now let's check that your account works correctly. Go to <SDK root>/tools/samples/deploy_war/example_war.properties and open it with your favourite text editor. Change the first four keys in the properties file like this:

# Your account name
account=i024099trial

# Application name
application=test

# User for login to hana.ondemand.com.
user=i024099

# URL of the landscape admin server. Optional. Defaults to hana.ondemand.com.
host=hanatrial.ondemand.com
Please note that you'll have to use your own account and user names.

Save the changes in the properties file and check that the account works using the following command line:
C:\neo-sdk-javaweb-1.30.8\tools>neo deploy samples\deploy_war\example_war.properties
Enter your password when prompted and if everything works fine you should see:

SAP HANA Cloud Console Client


Requesting deployment for:
   application    : test
   account        : i024099trial
   source         : samples/deploy_war/example.war
   elasticity data: [1 .. 1]
   severity       : error
   host           : https://hanatrial.ondemand.com
   SDK version    : 1.30.8
   user           : i024099

Password for your user:

[Sat Jun 29 18:30:51 FET 2013] Deployment started......
[Sat Jun 29 18:31:01 FET 2013] Deployment finished successfully
What does "landscape" mean? Isn't the cloud a single and somehow lonely place?

In case you haven't noticed, I mentioned registering a "trial" account in a "trial" landscape with some URL. I'm not sure adding "trial" to the URL makes things any better. Also, the URL for the SDK download was not on the trial landscape's subdomain. Does this mean we have more than one landscape?

The answer is yes - we have two landscapes which can be used for different purposes:
  • trial landscape - for experimenting and writing blogs :)
  • factory landscape - for running business applications and making money out of it

The SAP HANA Cloud is supposed to be singular, but as we can see we have two clouds with the same name. So what's the difference?

To put it simply - the trial landscape is not the big thing. Yes, you can do a lot of things with it, but you don't get all the bells and whistles of the real product. For example you don't get support (and the sweet support SLAs).

And ... you don't get to pay for it. So we'll continue using the trial in this blog.
 
Does "latest SDK" ring a bell? 

You probably noticed the "download the latest SDK" part above. This in practice means that we update the SDK once in a while. 

So if you want to use the latest and greatest you need to download it every 2 weeks. And yes - we try to keep it compatible.

The blog is indeed targeted more at operators, so why should you bother to download the SDK every 2 weeks? The answer is that the SDK contains not only development stuff (APIs, local runtime, samples, ...) but the client tools as well.

So even if you have an application you don't want to modify, you may still want to download the latest SDK to get all the improvements poured in on a bi-weekly basis.

Some of my next blogs for HANA Cloud will shed some light on the tooling in the SDK.

I don't have an existing application. Now what?

There are plenty of places on the web where you can find applications to try out - the JPetstore and GWT samples used below, for instance.

The rules here are that the application:
  • does not require modifications to Tomcat scripts, OS resources or native executables to do its job (i.e. it is a plain WAR)
  • is DB vendor neutral (no DB exotics)

Let's start. Or is it rather deploy and start?

If you've read this far you should have:
  • downloaded and configured SDK tools (so far only proxy settings)
  • application to run on SAP HANA Cloud

Running your application on HANA Cloud is actually a two-phase process:
  • uploading the application (we call this deployment) 
  • starting an application process with the uploaded binaries

Let's start with the upload part for JPetstore:
  1. Download JPetstore
  2. Extract the mybatis-jpetstore-6.0.0.war
  3. Rename it to ROOT.war
  4. Copy the ROOT.war to <SDK root>/tools/samples/deploy_war
  5. Modify the example_war.properties file and change:

    source=samples/deploy_war/example.war 

    to 

    source=samples/deploy_war/ROOT.war

  6. Deploy the application using:
C:\neo-sdk-javaweb-1.30.8\tools>neo deploy samples\deploy_war\example_war.properties
SAP HANA Cloud Console Client


Requesting deployment for:
   application    : test
   account        : i024099trial
   source         : samples/deploy_war/ROOT.war
   elasticity data: [1 .. 1]
   severity       : error
   host           : https://hanatrial.ondemand.com
   SDK version    : 1.30.8
   user           : i024099


[Sat Jun 29 23:38:56 FET 2013] Deployment started...............................
........
[Sat Jun 29 23:40:45 FET 2013] Deployment finished successfully

Now that we have the binaries uploaded, it is time to start the application with the following command:
C:\neo-sdk-javaweb-1.30.8\tools>neo start samples\deploy_war\example_war.properties

SAP HANA Cloud Console Client


Requesting start for:
   application: test
   account    : i024099trial
   host       : https://hanatrial.ondemand.com
   synchronous: false
   SDK version: 1.30.8
   user       : i024099

Password for your user:

Start request performed successfully.

Triggered start of application processes.
Component status: STARTING
You may have noticed that the command is actually asynchronous and just triggers the start process. To check the status of the application we've just attempted to start, you can use the status command. Once the application is started we'll get the access points:

C:\neo-sdk-javaweb-1.30.8\tools>neo status samples\deploy_war\example_war.properties

SAP HANA Cloud Console Client


Requesting status for:
   application: test
   account    : i024099trial
   host       : https://hanatrial.ondemand.com
   SDK version: 1.30.8
   user       : i024099

Status: STARTED

URL: https://testi024099trial.hanatrial.ondemand.com

Access points:
  https://testi024099trial.hanatrial.ondemand.com

Let's try to access the provided URL. And voilà we have the JPetstore running.

Trying the same with a GWT sample:
  1. Go to gwt-2.5.1/samples/Showcase/war
  2. ZIP the content into ROOT.war
  3. Copy it over the JPetstore ROOT.war from the previous example
  4. Deploy and start the application


What's in the next blog?
  • Why command line?
  • Scaling your application
  • Start/Stop/Restart
  • Managing multiple applications
  • Undeploy
