Monday, October 17, 2011

Developing and Deploying Java-Tomcat apps into Windows Azure

ref: http://blogs.msdn.com/b/cesardelatorre/archive/2010/09/12/developing-and-deploying-java-tomcat-apps-into-windows-azure.aspx


As you may know, Windows Azure is a multi-platform environment, so besides .NET we can run many other languages and platforms, like Java, PHP or Ruby, and even use a whole web/app server like Apache Tomcat, a DBMS like MySQL, and IDEs like Eclipse. You can get more info about this here:
http://www.microsoft.com/windowsazure/interop/
So, what I want to show in this post is how you can develop and deploy a simple Java app (a JSP page plus a servlet) to Windows Azure. I'll run it in the Windows Azure local Dev Fabric first, and then in the real Windows Azure cloud on the Internet.
There are a few steps we need to go through, like installing the Java SDK, Tomcat, etc. This is the software I installed on my dev machine (a Windows 7 machine):

SOFTWARE INSTALLATION

Regarding the base software installation (Java, Tomcat and Eclipse), it is critical that you install versions that match each other, especially with respect to the processor architecture (x86 or x64). In my case, all the versions I installed are x86 (even though my Windows 7 is an x64 version, that is not a problem).
So, I installed the following software versions:
JDK 6 Update 21 (STANDARD) Windows x86
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u21-oth-JPR@CDS-CDS_Developer
Eclipse IDE for Java EE Developers (GALILEO SR2)
http://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/galileosr2

Apache Tomcat 6.0.29
IMPORTANT: Download the Windows ZIP version. In my case, the ‘32-bit Windows zip’ version, from the following URL:
http://tomcat.apache.org/download-60.cgi
Do not install the ‘32-bit/64-bit Windows Service Installer’, as that is the Tomcat Windows Service version, and you won’t be able to deploy a Windows/NT service in the Windows Azure PaaS. We need the ‘process/command-line’ version of Apache Tomcat.
Then, just unzip Tomcat in your selected directory.
In order to run Tomcat, we also need to set the JRE_HOME environment variable to the Java SDK directory. In my case, “C:\Program Files (x86)\Java\jdk1.6.0_21”.
Install the Windows Azure Tools for ECLIPSE:
Once you have Eclipse already installed, run Eclipse and install the ‘Windows Azure Tools for ECLIPSE’ from Eclipse itself:
--> Eclipse --> Help --> Install New Software --> click Add, give it the name ‘Windows Azure Tools for Eclipse’ --> Location: http://www.windowsazure4e.org/update --> select Windows Azure Java SDK.
Then, almost “Next-Next-Next”…
Windows Azure Tomcat Solution Accelerator
Download and unpack the ‘Windows Azure Tomcat Solution Accelerator’. You can get it from here (in my case, I used the x86 version, to match the other x86 versions):
http://code.msdn.microsoft.com/winazuretomcat/Release/ProjectReleases.aspx?ReleaseId=3550

Java Demo-App creation

- Create a New Project in Eclipse: File-->New-->Other-->Web-->Dynamic Web Project --> Set a name --> “HelloWorld” --> Create
- Then we need to add the external JARs for JSP and Servlets support:
- Project Properties --> Java Build Path --> Libraries --> Add External JARs --> browse the hard drive to the Eclipse plugins folder (C:\JavaEnv\eclipse\plugins) --> add the javax.servlet.jsp_2.0.0.v200806031607.jar and javax.servlet_2.5.0.v200910301333.jar files.
- If we wanted to access Windows Azure storage (blobs/queues/tables) from Java, we would also need to add the following JAR: org.soyatec.windows.azure.java_1.0.0.201002091324
- But in this case I just want to run a standard and simple JSP and SERVLET App.
- Then, OK-->OK.
- Now, we add a simple JSP page:
- Expand the project --> New --> JSP --> Name: “index.jsp”
- We add its code: a very simple HTML form that will call/execute our future servlet (a sketch is shown below).
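- The original post showed the JSP source in a screenshot. A minimal index.jsp along these lines should work; the page markup and the form’s action URL (the servlet’s default /HelloWorldServlet mapping) are my assumptions, but the fullname field name matches what the servlet reads later on:

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<html>
<head>
<title>HelloWorld JSP</title>
</head>
<body>
	<!-- Posts the 'fullname' field to the servlet, which reads it with request.getParameter("fullname") -->
	<form action="HelloWorldServlet" method="post">
		Full name: <input type="text" name="fullname" />
		<input type="submit" value="Say hello" />
	</form>
</body>
</html>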
- Now, we create our servlet:
- From Eclipse, New --> Other --> Web --> Servlet
- In the Create Servlet dialog:
o Java Package --> specify a name for your Java package, like “MyJavaPackage”
o Class Name --> specify “HelloWorldServlet”
- Next
- Add a description --> “Simple demo Servlet App”
- Next
- Methods --> doPost & doGet
- Click FINISH
- Now, within the doPost() method we add our ‘processing’ code. We could put it in a separate method, but for this short demo we can put it directly inside doPost():
package MyJavaPackage;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Servlet implementation class HelloWorldServlet
 */
public class HelloWorldServlet extends HttpServlet {
	private static final long serialVersionUID = 1L;

	public HelloWorldServlet() {
		// Default constructor generated by the Eclipse wizard
	}

	protected void doGet(HttpServletRequest request, HttpServletResponse response)
			throws ServletException, IOException {
		// Left empty by the wizard; the demo only uses POST from the JSP form
	}

	protected void doPost(HttpServletRequest request, HttpServletResponse response)
			throws ServletException, IOException {
		response.setContentType("text/html;charset=UTF-8");
		PrintWriter out = response.getWriter();

		// Read the 'fullname' field posted by the index.jsp form
		String fullName = request.getParameter("fullname");

		// Write out a minimal HTML page greeting the user
		out.println("<html>");
		out.println("<head><title>Bienvenido</title></head>");
		out.println("<body>");
		out.println("<h1>Hi " + fullName + ", greetings from JAVA Server environment!!</h1>");
		out.println("</body>");
		out.println("</html>");
		out.close();
	}
}
- You can see the same code in Eclipse; use the right-click context menu (quick fix) to import PrintWriter.
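- One optional tweak, not in the original demo: the wizard leaves doGet() as an empty stub, so you could make it delegate to doPost() so the servlet also responds when its URL is requested directly with a GET:

	protected void doGet(HttpServletRequest request, HttpServletResponse response)
			throws ServletException, IOException {
		// Optional (not part of the original demo): reuse the same handling for GET requests
		doPost(request, response);
	}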
- Start Apache Tomcat:
- From the command prompt, within Tomcat’s bin directory, just type “startup.bat”.
- From Eclipse, I export my project as a WAR file:
- Right-click the project --> Export --> WAR file
o Destination, in my case: E:\JavaEnv\apache-tomcat-6.0.29-windows-x86\webapps, with ROOT.war as my file name, so it will run as the default webapp.
o Check Overwrite Existing Files
- And finally, test the default page (my .JSP page should be executed):
- Write any name and submit to the SERVLET, and see results:
- OK, so our Java app is now running on a regular Apache Tomcat (as a plain command-line process).

Deploying the App to Windows Azure local Development-Fabric

- Open CMD from Windows Azure SDK (Programs--> Windows Azure SDK v1.2--> Windows Azure SDK Command Prompt)
- Move to the directory where I have unpacked the Windows Azure Tomcat Solution Accelerator. In my case:
- cd E:\JavaEnv\WATomcatAccelerator_x86\Tomcat
- We have 3 Command files (Buildme.cmd, Packme.cmd and Runme.cmd):
- Buildme.cmd creates/builds our solution. In this step we’ll need to provide the Apache Tomcat path and the Java runtime path, which gives us complete control: we can deploy any particular Tomcat or Java runtime version.
- The Runme.cmd command deploys and runs our app and its base software in the Windows Azure DEVELOPMENT fabric. It moves Tomcat, the Java runtime and our app into the dev-fabric local storage.
- The accelerator then configures Tomcat to listen on the right TCP port. In Windows Azure there is a load balancer in front of all our VM instances, and Tomcat has to run on a port other than 80, configured so that the load balancer can ‘talk’ to Tomcat.
- Next, it starts the Tomcat process.
- And finally it monitors Tomcat, so if for some reason Tomcat crashes, Windows Azure is notified and the node can be restarted, etc.
STEP 1: Building the solution: Run the BUILDME.CMD
- The Buildme.cmd batch file, which is available in the root folder of the solution, generates the CSX folder. This batch file should be executed only from the Windows Azure SDK command prompt. When executed, it checks whether the Tomcat and Java binaries are present in the required directories. If not, it prompts for the path to the binaries and the user needs to provide the path to the binaries folder (e.g. E:\Binaries\Tomcat). On a successful build, it generates the CSX folder under the root folder of the solution. This is required to run the solution in the development fabric.
- It first asks for the tomcat binaries, so we provide it (E:\JavaEnv\apache-tomcat-6.0.29-windows-x86):
- Then, it asks for the Java runtime binaries path, in my case “C:\Program Files (x86)\Java\jre6”:
STEP 2: Running the solution (WA development fabric): Run the RUNME.CMD
- Now we run the app with the Runme.cmd command file:
- It deploys JRE, Tomcat and our App into the WA Dev-Fabric, and finally it starts TomCat:
- Now we go to the Development Fabric UI, where we can see it running:
- Then we can use any browser to run our app. We need to specify the IP address and TCP port where the dev fabric started Tomcat, which in my case is http://127.0.0.1:81.
- So finally we can see our Java app running on TomCat and Windows Azure local Development Fabric:
- Next step would be uploading our app to Windows Azure cloud in the Internet.
STEP 3: Uploading the solution to Windows Azure cloud in the Internet
- The Packme.cmd batch file, which is available in the root folder of the solution, generates the Tomcat.cspkg package for the solution under that same root folder. This batch file should be executed only from the Windows Azure SDK command prompt. The package, together with the service configuration file (.cscfg), is what is used to deploy the solution to the cloud. Once the package is deployed, the application can be accessed via the following URLs:
Application | Staging URL | Production URL
Admin page | http://.cloudapp.net | http://.cloudapp.net
NOTE: The ServiceDefinition.csdef and ServiceConfiguration.cscfg files should be kept under the root folder of the solution for Buildme.cmd, Packme.cmd and Runme.cmd to execute properly. So if any change has been made to these files in the solution, the latest version should also be copied to the root folder.
- So, when we run the Packme.cmd, we generate the WA package needed to upload it:
- We can see in our directory that there’s been a new .cspkg Windows Azure file generated for us:
- Take into account that, because we need to upload the JRE and Tomcat, our app’s package is quite heavy (around 56 MB), and therefore the deployment upload will take a while. One workaround is to upload it just once to a Windows Azure blob and then deploy it from there much faster (useful in case we delete/upload the same app version many times).
- So now, we just need to upload it to the Production or staging environment in WA cloud through the WA dev portal:
- But, I really recommend uploading it first to a Windows Azure BLOB, for instance, using the Azure Storage Explorer:
- Deploying from the blob then looks like the following:
- And selecting the files from within a BLOB container instead of my local PC:
- Then, we can see it uploading it (from local or from Blob container):
- And after a few minutes (after uploading the package, which would be much faster from Azure Blob), we’ll see it deploying:
- Once it is deployed, we need to start the node/virtual machine:
- And finally, after some more minutes (wait until it reaches the ‘Ready’ state; it starts in the ‘Initializing’ state, then ‘Busy’, and finally ‘Ready’), it will be up and running in our Windows Azure cloud.
- And now, just execute the app from your Windows Azure URL. In my case: http://javatomcatdemo.cloudapp.net/
- And then my SERVLET execution:
- COOL! Now you could try any other Java-Tomcat app (web service, etc.).

Related Links

WINDOWS AZURE INTEROP
http://www.microsoft.com/windowsazure/interop/
INTEROPERABILITY BRIDGES - LIST
http://www.interoperabilitybridges.com/Projects.aspx

TomCat Solution Accelerator
http://code.msdn.microsoft.com/winazuretomcat
AzureRunMe
http://azurerunme.codeplex.com/
Windows Azure Tools for Eclipse (PHP)
http://www.interoperabilitybridges.com/projects/windows-azure-tools-for-eclipse.aspx
Windows Azure SDK for Java
http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java.aspx
AppFabric SDK for JAVA
http://www.jdotnetservices.com/
TomCat Logs
http://code.msdn.microsoft.com/azurediag
Windows Azure Mediawiki MySQL Solution Accelerator
http://code.msdn.microsoft.com/winazuremediawiki
Windows Azure Jetty Solution Accelerator
http://code.msdn.microsoft.com/winazurejetty
eBay’s page for iPad listings, http://ipad.ebay.com, is hosted on the public Windows Azure platform. You may need an iPad to view the page.

Saturday, October 15, 2011

Introducing Amazon Web Services to a Windows Azure developer

Ref: https://windowsclient.net/blogs/anshulee/archive/2010/10/14/introducing-amazon-web-services-to-a-windows-azure-developer.aspx


If you have been working on Windows Azure and have now decided to ramp up on Amazon Web Services, this note can help you map Amazon to Azure and understand where one stands with respect to the other.
Just like the Azure platform is broken up into SQL Azure, AppFabric (Service Bus and Access Control), Windows Azure (tables, blobs and queues), web and worker roles, the Azure CDN, etc., and you had to ramp up on all those terms when you started on Azure, Amazon doesn’t make life easier either :-)
The MAIN difference between Amazon and Azure is of course IaaS vs PaaS, and if you haven’t read my earlier blog post on this, I suggest you read it now.
What I am attempting to do here is introduce you to the different aspects of Amazon Web Services (AWS), and at the same time mention how each is similar (note “similar”, not necessarily “the same”) to a feature available in Azure.
Let’s start with the most frequently heard term:
  • Amazon EC2…
So you keep reading about Amazon EC2, and at least I used to think each machine in that data center was called an Amazon EC2. Nope: Amazon EC2 is a web service that enables you to launch and manage server instances in Amazon's data centers using APIs or the available tools and utilities.
So what you launch is a server instance, created from an Amazon Machine Image (AMI), and you launch and manage those instances using EC2. These are also called EC2 instances, but you must be clear that EC2 is the web service and not the instance itself!!
  • Amazon Elastic Block Store (EBS) :- (Azure Counterpart:- None)
Amazon Elastic Block Store (Amazon EBS) offers persistent storage for Amazon EC2 instances (something we sorely miss in Azure, and it leads to a lot of heartache!!!). Amazon EBS volumes provide off-instance storage that persists independently from the life of an instance. Amazon EBS volumes offer greatly improved durability over local Amazon EC2 instance stores, as Amazon EBS volumes are automatically replicated on the backend (in a single Availability Zone).
  • Elastic IP Addresses
Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account.
  • Auto Scaling :- (Azure Counterpart:- None)
Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees
Though auto scaling is possible in Azure, it is left to the developer to implement the autoscaling algorithms. The option of setting trigger conditions and creating Auto Scaling groups, etc., that Amazon offers is every Azure developer’s dream!!
  • Amazon Virtual Private Cloud :- (Azure Counterpart:- None)
Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection.
Not applicable to Azure, since it is a PaaS offering.
  • Amazon CloudWatch :-(Azure Counterpart:- IIS Logs)
Amazon CloudWatch is a web service that provides monitoring for AWS cloud resources, starting with Amazon EC2. It provides you with visibility into resource utilization, operational performance, and overall demand patterns—including metrics such as CPU utilization, disk reads and writes, and network traffic
Azure IIS logs don’t really come close to the functionality provided by CloudWatch, and they need to be configured in code, moved to blobs and then pulled down.
  • Amazon Cloud Front :- (Azure Counterpart:- Azure CDN)
Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.
Requests for your objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
Being a typical CDN, it is pretty comparable to the Azure CDN.
  • Amazon SimpleDB :- (Azure Counterpart:- Windows Azure Table Storage)
Amazon SimpleDB is a highly available, scalable, and flexible non-relational data store. Unbound by the strict requirements of a relational database, Amazon SimpleDB is optimized to provide high availability, ease of scalability, and flexibility with little or no administrative burden.
Pretty comparable to Azure Table Storage, though it will be interesting to see which one performs better in times of “stress” and partitioning requirements..
  • Amazon RDS :- (Azure Counterpart:- SQL Azure)
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks.
Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS.
Amazon RDS is MySQL on the cloud and SQL Azure is SQL Server on the cloud!!
  • Amazon Simple Queue Service (Amazon SQS):-(Azure Counterpart:-Windows Azure Queues)
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed components of their applications that perform different tasks, without losing messages or requiring each component to be always available.
Again quite comparable to the functionality delivered by Azure Queues; a small sketch of the SQS Java API follows below.
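For illustration only, here is a hedged sketch of what sending and receiving a message might look like with the AWS SDK for Java; the queue name, message text and placeholder credentials are invented for the example:

import java.util.List;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SqsDemo {
    public static void main(String[] args) {
        // Placeholder credentials; substitute your own AWS access key and secret key
        AmazonSQSClient sqs = new AmazonSQSClient(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Create (or reuse) a queue and drop a message onto it
        String queueUrl = sqs.createQueue(new CreateQueueRequest("DemoQueue")).getQueueUrl();
        sqs.sendMessage(new SendMessageRequest(queueUrl, "Hello from a distributed component"));

        // Another component can later pull the message off the queue
        List<Message> messages = sqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages();
        for (Message m : messages) {
            System.out.println(m.getBody());
        }
    }
}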
  • Amazon Simple Notification Service (Amazon SNS):- (Azure Counterpart:- Azure Service Bus eventing and subscription Model)
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. It is designed to make web-scale computing easier for developers.
Directly comparable to the eventing and subscription model available in the Azure AppFabric Service Bus. The service namespace coupled with message buffers makes for a powerful publish-subscribe model.
  • Elastic Load Balancing :- (Azure Counterpart:- Inbuilt in the Azure fabric)
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic.
This is of course inbuilt in the Azure framework.
  • Amazon Simple Storage Service (Amazon S3):-(Azure Counterpart:- Windows Azure Blob Storage)
Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites.
Similar to Blob functionality
Hope this helps clear the air a bit. Coming up next: a deep dive into Amazon EC2 instances… oops, AMIs :-)

Sunday, October 9, 2011

ref: http://www.wiimoteproject.com/bluetooth-and-connectivity-knowledge-center/a-summary-of-windows-bluetooth-stacks-and-their-connection/?PHPSESSID=7c92becb1e21535fefa23bebf93fea48

Wiimote Friends

This is a summary of the six most common Bluetooth stacks. I will try to cover the fundamentals of each Bluetooth stack and the known challenges/issues. Most Bluetooth radios are bundled with their own software and drivers (the Bluetooth stack); some interchangeability is possible, but it is mostly trial and error.

The Wiimote was NOT designed to be connected to a PC, so connection is not as easy as we would like. WiimoteConnect with the Microsoft stack is currently the easiest and best setup. However, the latest version of BlueSoleil is quite easy to connect, and Widcomm is the most stable, although it requires a few steps to connect.

There is no need to enter any pairing code because the Wiimote was not designed to pair to a PC.

 - Microsoft (XP/Vista/MAC Bootcamp)
 - BlueSoleil
 - Widcomm
 - Dell/HP/etc. Laptop Inbuilt Stack
 - Toshiba
 - Logitech Stack


A summary of compatible devices can be found here:
http://wiibrew.org/wiki/List_of_Working_Bluetooth_Devices

If you have anything to add to this knowledge base please PM me.

If you have a BT connectivity issue, please:
1. Check that your device is compatible (see links above)
2. Read this guide to make sure you have the correct stack version, etc., and no Logitech software
3. Follow the correct connection protocol (see the links with each stack)
4. Read the self-help guide and do what you feel confident doing: http://www.wiimoteproject.com/bluetooth-and-connectivity-help-center/bt-problems-a-self-help-guide-**-please-read-before-posting-problem**/
5. Post your problem to the Bluetooth help centre: http://www.wiimoteproject.com/bluetooth-and-connectivity-help-center/


Windows Default Bluetooth Stack
Overview:



Tuesday, October 4, 2011

Barcode


Data Matrix is one of the smallest and most dependable barcode symbologies (smaller than QR Code). The size difference between popular barcode types is compared in the Barcode Symbology Evaluation and Test Sheet.

GS1 Data Matrix barcode symbols do not encode the last digit of the GTIN, which is a check digit. If the full 14-digit GTIN is needed after reading the symbol, it may be generated with a MOD10 calculation.
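To make that MOD10 calculation concrete, here is a small sketch in Java (the 13 data digits in the example are invented); the GS1 check digit algorithm applies weights of 3 and 1 alternately, starting with 3 at the rightmost data digit:

public class Gtin14CheckDigit {

    // Computes the GS1 MOD10 check digit for a string of data digits
    // (13 digits for a GTIN-14), weighting 3,1,3,1,... from the rightmost digit.
    static int checkDigit(String dataDigits) {
        int sum = 0;
        int weight = 3;
        for (int i = dataDigits.length() - 1; i >= 0; i--) {
            sum += (dataDigits.charAt(i) - '0') * weight;
            weight = (weight == 3) ? 1 : 3;
        }
        return (10 - (sum % 10)) % 10;
    }

    public static void main(String[] args) {
        String first13 = "1234567890123";                  // hypothetical data digits
        System.out.println(first13 + checkDigit(first13)); // prints the full 14-digit GTIN
    }
}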

According to GS1 specifications, the first FNC1 character should be decoded as "]d2" and any additional FNC1 characters should be decoded as the GS character (ASCII 29). The separator is usually only visible when scanned with the Barcode Scanner ASCII String Decoder. Not all scanners properly decode the first FNC1 character as "]d2".
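As a hedged illustration (the AIs and values below are invented), a decoder that receives the transmitted string could strip the "]d2" symbology identifier and then split on the ASCII 29 separator:

public class Gs1ScanSplit {
    public static void main(String[] args) {
        char GS = (char) 29; // FNC1 rendered as the Group Separator control character

        // Hypothetical raw scanner output: ]d2 identifier, AI 01 (GTIN, fixed length),
        // then AI 10 (batch, variable length) terminated by GS, then AI 21 (serial).
        String raw = "]d2" + "0101234567890128" + "10ABC123" + GS + "21SER456";

        String data = raw.startsWith("]d2") ? raw.substring(3) : raw;
        for (String chunk : data.split(String.valueOf(GS))) {
            // Each chunk starts with an AI; splitting fixed-length AIs like 01 out of a
            // chunk still requires the GS1 AI length table.
            System.out.println("chunk: " + chunk);
        }
    }
}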

GS1 DataMatrix uses a special start combination to differentiate the GS1 DataMatrix symbol from the other Data Matrix ECC 200 symbols. This is achieved by using the Function 1 Symbol Character (FNC1) in the first position of the data encoded. It enables scanners to process the information according to the GS1 System Rules. 
The FNC1 is encoded in two separate ways within GS1 DataMatrix:
- When used as part of the special start combination: the start character (ASCII 232)
- When used as a field separator (see 2.2.2, Concatenation): the GS character (ASCII 29)

Using GS1 DataMatrix, it is possible to concatenate (chain together) discrete Application Identifiers (AIs) and their data into a single symbol. When the AI data is of pre-defined length, no field separator is required: the next Application Identifier and its data are concatenated immediately after the last character of the previous AI data. Where the AI data is not of pre-defined length, it must be followed by a field separator when concatenating more AIs, as sketched below.
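A small sketch of that rule in Java (the AIs and values are invented for the example): AI (01) has a pre-defined length, so the next AI can follow immediately, while the variable-length AI (10) must be terminated with the separator before the next AI:

public class Gs1Concatenate {
    public static void main(String[] args) {
        char GS = (char) 29; // FNC1 acting as the field separator in the encoded data

        String gtin = "01234567890128"; // AI 01: pre-defined length (14 digits), no separator needed
        String batch = "ABC123";        // AI 10: variable length
        String serial = "SER456";       // AI 21: variable length, last field, no trailing separator

        // 01 + GTIN is pre-defined length, so 10 + batch follows immediately;
        // batch is variable length, so it needs GS before the next AI (21).
        String elementString = "01" + gtin + "10" + batch + GS + "21" + serial;
        System.out.println(elementString.replace(GS, '|')); // show the separator as '|' for readability
    }
}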

Global Trade Item Number (GTIN) is an identifier for trade items developed by GS1. Such identifiers are used to look up product information in a database (often by inputting the number through a bar code scanner pointed at an actual product) which may belong to a retailer, manufacturer, collector, researcher, or other entity.

The Global Trade Item Number (GTIN) only identifies the product type or stock-keeping unit (SKU) rather than an individual instance of a particular product type. To ensure that an EPC always uniquely identifies an individual physical object, in the case of a GTIN the EPC is constructed as a Serialised Global Trade Item Number (SGTIN) by combining a GTIN product identifier with a unique serial number.

The Universal Product Code (UPC) is a barcode symbology. Its most common form, the UPC-A, consists of 12 numerical digits, which are uniquely assigned to each trade item.

An EAN-13 (European Article Number) barcode is a 13 digit (12 data and 1 check) barcoding standard which is a superset of the original 12-digit Universal Product Code (UPC) system developed in the United States.