Saturday, September 24, 2011

Simple Game Development Using Java

Balloon Game :

A simple Java game built with the standard Java APIs, without using any game engine.

It is very small, about 10 KB (plus 15 resources such as images and sounds), and was written for educational purposes: to show Java developers simple ways of using the Java APIs to build nice scenes without any complexity in the code.

The idea is steering a balloon away from pins that can cause it to explode.

In the next gaming post, I will show how to build more complex Java games using 3D engines and/or Java3D.

The game scenes are entirely configurable. In this release I have made 11 scenes (numbered 0 to 10), but you can edit the property file to add millions of scenes without changing a single line of code in the game.
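To illustrate the idea, a per-scene configuration can be loaded with the standard java.util.Properties API. The key names below (scene.N.pins) are hypothetical, since the game's actual property file layout is not shown here; this is only a sketch of the technique.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class SceneConfig {

    // Look up a per-scene value using a "scene.<n>.pins" key convention
    // (hypothetical key names), falling back to 0 if the scene is undefined.
    public static int pinsForScene(Properties props, int scene) {
        return Integer.parseInt(props.getProperty("scene." + scene + ".pins", "0"));
    }

    public static void main(String[] args) throws IOException {
        // In the real game this would be loaded from a file on the classpath.
        Properties props = new Properties();
        props.load(new StringReader("scene.0.pins=3\nscene.1.pins=5\n"));
        System.out.println(pinsForScene(props, 1)); // prints 5
    }
}
```

Because scenes are plain property entries, adding a new scene is just adding new keys, with no code change.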

Download URL:

Saturday, September 17, 2011

Performance Tuning of ATG application

Before you start troubleshooting an ATG application issue, you may do the following:
-Get a clear problem definition.
-Gather all possible information about it: the affected transactions; whether it appears under load or with a single page; whether it occurs at certain times or all day; whether it affects one managed server or all servers; when it started to occur; what changed before that time (any applied patch, new driver, etc.).
-Get all possible tools and methods ready to use.
-Get the application code in hand.
-Start building your investigation plan.
-The plan can have trial elements, information-gathering elements, possible-fix elements, and a permanent fix/conclusion.
-If the issue is in the production environment, try to replicate it in another environment, so you can work on a fix without any possible business impact.

Here is a guide for investigating and setting up the production environment for an ATG application, built from my experience plus the ATG documentation:

1) ATG Application side Recommendations:

-Enabling liveconfig Settings:
When you’re ready to deploy your Nucleus-based application in a production environment, enable the settings in the liveconfig configuration layer. This layer overrides many of the default configuration settings with values that are more appropriate for a deployed site. For example, the liveconfig configuration layer improves performance by reducing error checking and detection of modified properties files.

To enable liveconfig, you can use the –liveconfig argument for runAssembler, or add the liveconfig setting to the WEB-INF/ATG-INF/dynamo.env file in the atg_bootstrap.war module of your EAR file.
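The dynamo.env line in question is the liveconfig switch; as documented for the ATG platform (verify against your version's documentation), it is:

```properties
atg.dynamo.liveconfig=on
```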

a) Disabling Checking for Changed Properties Files:
A configuration property controls whether, and how often, ATG rereads .properties or .java files the next time an instance of a given (non-global) component is created. The default is 1000. This feature is useful during development, but we recommend disabling it once a site goes live, for better performance. The value -1 disables the reloading of .properties and .java files altogether.

b) Disable the Performance Monitor:
The Performance Monitor (/atg/dynamo/service/PerformanceMonitor) can be used to gather statistics about the performance of specific operations in ATG components.
You can disable it by setting its mode property to 0.
Note that the Performance Monitor is already disabled in the liveconfig configuration layer.

c) Adjusting the pageCheckSeconds Property:
ATG’s Page Processor compiles JHTML pages into .java files (JSP compilation is handled by your application server). The page processor, located at /atg/dynamo/servlet/pagecompile/PageProcessor, checks for new Java Server Pages that need to be compiled. You can improve performance by increasing the Page Processor’s pageCheckSeconds property. The page compile servlet uses this property value to determine whether to check for new Java Server Pages that need to be recompiled. If a request occurs within this time interval (measured in seconds) for the same page, ATG will not check the date on the file. This improves performance in serving pages.
A value of 0 causes ATG to check for new pages on each request. The default value is 1. The liveconfig value is 60.
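As an illustration, the liveconfig value could be set in a properties layer for the component path given above (a sketch; check your installation's configuration layering):

```properties
# liveconfig/atg/dynamo/servlet/pagecompile/PageProcessor.properties
pageCheckSeconds=60
```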

-Fine-Tuning JDK Performance with HotSpot
Refer to the Oracle HotSpot performance tuning documentation for more details.

-Configuring for Repositories:
a) Enable Caching:
Specify the correct cache sizes for each repository according to your data size.

b) Setting Cache Modes:
Select the proper cache mode
Remember that if you use locked mode caching, you must also enable lock manager components.

c) Populating Caches on Startup:
You can pre-populate caches in a SQL Repository by using tags in the repository definition file.
This benefit may come at the cost of slower startup times.

d) Configuring Repository Database Verification for Quicker Restarts:
By default, each SQL Repository component verifies each of the tables in its database on startup with a simple SQL query. These verification queries can slow the ATG startup routine.
You may wish to set the updateSchemaInfoCache property to true in your atg.adapter.gsa.GSARepository components, such as /atg/dynamo/service/jdbc/ProfileAdapterRepository.

e) Configure proper caching timings:
**item-cache-timeout :
This attribute defines how long (in milliseconds) a repository item can exist in the item cache without having been accessed before it needs to be reloaded from the database. Effectively, there is a "last touched" timestamp associated with each item cache entry; if the time since the item was last touched is greater than the item-cache-timeout setting, its properties are loaded from the database instead of the in-memory cache, which is then updated with the values from the database.

**item-expire-timeout :
This attribute defines how long (in milliseconds) a repository item can remain in the item cache before it needs to be reloaded from the database. Effectively, there is a "time loaded" timestamp associated with each item cache entry; if the time since the item was cached is greater than the item-expire-timeout setting, its properties are loaded from the database instead of the in-memory cache, which is then updated with the values from the database.

**query-expire-timeout :
This attribute is the same as item-expire-timeout, but for query cache entries: it defines how long (in milliseconds) a query can remain in the query cache before it needs to be rerun against the database. Effectively, there is a "time loaded" timestamp associated with each query cache entry; if the time since the entry was cached is greater than the query-expire-timeout setting, the query results are loaded from the database instead of the in-memory cache, which is then updated with the values from the database.

Note that the query-expire-timeout attribute only applies when you have query caching enabled.
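As a sketch, these timeouts are attributes of an item-descriptor in the SQL repository definition file; the item-descriptor name and the values below are illustrative only:

```xml
<!-- All times are in milliseconds; values here are illustrative -->
<item-descriptor name="product"
                 item-cache-timeout="600000"
                 item-expire-timeout="3600000"
                 query-expire-timeout="300000">
  <!-- table and property definitions go here -->
</item-descriptor>
```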

-Setting Logging Levels :
If you want to disable logging entirely, or specify different logging levels, you can do that in the relevant properties file. For example:
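Assuming the standard ATG convention of setting the logging* properties in a catch-all configuration layer such as localconfig/GLOBAL.properties (which applies to all components; verify this for your setup):

```properties
# localconfig/GLOBAL.properties -- applies to every Nucleus component
loggingDebug=false
loggingInfo=false
loggingWarning=true
loggingError=true
```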

**Your application code must follow the standard practice of checking the logging level before building and logging messages.
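In ATG components the idiom is to guard with isLoggingDebug() before calling logDebug(); the same pattern with the standard java.util.logging API (used here so the sketch is self-contained and runnable outside ATG) looks like this:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger log = Logger.getLogger(GuardedLogging.class.getName());

    // Stand-in for a potentially expensive message build (string concatenation,
    // repository lookups, etc.).
    public static String describe(int orderCount) {
        return "processed " + orderCount + " orders";
    }

    public static void process(int orderCount) {
        if (log.isLoggable(Level.FINE)) {   // analogous to isLoggingDebug()
            log.fine(describe(orderCount)); // analogous to logDebug(...)
        }
    }
}
```

The point is that the message is only constructed when the level is enabled, so disabled debug logging costs almost nothing.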

-Limiting Initial Services for Quicker Restarts
This is configured using initialServices property of the /atg/dynamo/Initial component.

-Disabling Document and Component Indexing
The ACC creates and maintains indexes of documents and components. For sites with large numbers of documents or components, indexing can take time and CPU resources. Once your site is deployed and relatively stable, you may want to limit or eliminate the indexing of documents or components.
The document and component indexes are maintained incrementally once built, and are rebuilt completely once a day at 1 a.m. by default. An index is rebuilt at startup only if it does not exist at all.
You can selectively exclude portions of the document tree from indexing by adding absolute pathname prefixes to the excludeDirectories property of the /atg/devtools/DocumentIndex component.
The same is true for component indexing, but the component is /atg/devtools/ComponentIndex instead. To improve performance on a live site, you can turn off all document and component indexing by setting the enabled property of the DocumentIndex and ComponentIndex components to false.

-Compress Content:
Compress pages (remove white space) and static content (images, JS, CSS) in your web server; this speeds up download time in the browser.
Compressing HTML/JavaScript/CSS/XML/JSON content can significantly reduce response times. GZIP reduces the size of responses by between 50% and 80%, depending on the type of content. Only turn on GZIP compression for text/html, text/plain, text/json, text/css, and application/x-javascript mime types.
To verify that gzip is being used, “Accept-Encoding: gzip,deflate” should show up in the request header and “Content-Encoding: gzip” should show up in the response header.
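As a quick, self-contained illustration of why gzip helps for text content, the snippet below compresses a repetitive HTML fragment with the standard GZIPOutputStream (the sample markup is made up for the demo):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {

    // Return the gzipped size, in bytes, of the given text.
    public static int gzippedSize(String text) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.size();
    }

    public static void main(String[] args) {
        // Repetitive markup, like real HTML, compresses very well.
        String html = "<li class=\"item\">product</li>".repeat(200);
        System.out.println(html.length() + " -> " + gzippedSize(html));
    }
}
```

Binary formats like JPEG or PNG are already compressed, which is why gzip should be limited to the text mime types listed above.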

-Re-structure your pages to:
*Move .css includes to the top of all pages.
*Move script files to the bottom of all pages whenever possible.
*Move all embedded scripts into external files.

-Ajax Cache:
Cache your Ajax responses to speed up the user experience wherever the results can be cached.

-Pre-Compiling JSPs:
This might slow deployment/server start-up, but it speeds up the first page request.

-Session Stickiness:
It’s important that session stickiness works properly; without it, sessions may be continually restored after each request.

-Keep ATG Patched:
When possible, the latest version of ATG should be used. As cumulative patches are released, they should be applied.

-HTTP Connection Reuse:
Using the “Keep-Alive” header allows browsers to reuse the same TCP connection for multiple request/response pairs. Not re-establishing a TCP connection for each HTTP request/response helps reduce network traffic, reduces the load on the SSL accelerator, and improves performance by avoiding TCP connection setup and teardown on every request.
A good Keep-Alive value for CSC is 300 seconds (5 minutes).
Note that both Internet Explorer 6 and 7 forcibly terminate TCP connections after one minute, regardless of the Keep-Alive setting; a workaround exists for this.

2) Server\JVM\Operating system Configurations:
-Set up JDBC connection pools sized to match the expected concurrent users of the site.
-Set the JTA timeout to 120 seconds whenever possible.
-Increase the maximum number of concurrently open files allowed by the operating system.
-Compress the web application output.
-Configure the stuck-thread max time to a proper value so you can catch where threads usually get stuck.

3) Using Performance measure tools:

A) ATG built-in Features:

1) Performance monitor:
*Adding PerformanceMonitor Methods to your Code
To enable the Performance Monitor to monitor a section of your Java code:
1. Import the atg.service.perfmonitor.* package.
2. Declare an opName parameter to label the section of the code. This parameter is displayed in the Performance Monitor page under the Operation heading.
3. (Optional) Declare a parameter name if you want to gather data on individual executions of an operation.
4. Call the startOperation method at the beginning of the operation whose performance you want to be able to measure.
5. Call the endOperation method at the end of the operation whose performance you want to be able to measure.
6. Optionally, call the cancelOperation method if an exception occurs. This causes the results of the current execution to be ignored.

These methods can be nested with different or the same opNames.
boolean exception = false;
PerformanceMonitor.startOperation(opName, parameter);
try {
    // ... code that actually renders foo.jsp ...
} catch (Exception e) {
    PerformanceMonitor.cancelOperation(opName, parameter);
    exception = true;
} finally {
    if (!exception)
        PerformanceMonitor.endOperation(opName, parameter);
}

*Performance Monitor Modes:
You can set the Performance Monitor’s operating mode by setting the mode property of the component at /atg/dynamo/service/PerformanceMonitor:
disabled 0 (default)
normal 1
time 2
memory 3
You should use mode 2 (time) to get accumulated results; also, enable it only after warming up the site, to exclude extreme readings caused by cache loading, etc.

*View the Results:
You can view the information collected by the Performance Monitor on the Performance Monitor’s page
of the Dynamo Administration UI at:

2) Using the VMSystem Component :
The ATG component located at /VMSystem provides a way for you to access the Java memory manager.
You can monitor the status of the Virtual Machine and call methods on it. An interface to the VMSystem component is included in the Dynamo Administration UI at:
From this page, you can conduct the following VM Operations:
• Perform garbage collection
• Run finalizations
• Show memory information
• List system properties
• List thread groups
• List threads
• Stop the VM

3) Sampler:
When testing your site, it is useful to automatically sample performance to understand throughput as a function of load. ATG includes a Sampler component at /atg/dynamo/service/Sampler.
Starting the Sampler: You can start the Sampler component by opening it in the ACC and clicking the Start button.
You can also start the Sampler component from the Dynamo Administration UI by requesting this URL:
The first time you request this page, ATG instantiates the Sampler component, which begins recording statistics.
You can configure ATG to start the Sampler whenever ATG starts by adding the Sampler to the initialServices property of the /atg/dynamo/service/Initial component:

The Sampler outputs information to the file /home/logs/samples.log. For each system variable that it samples, it records the following information in the log file:
• the current value
• the difference between the current value and the value recorded a minute earlier
• the rate of change of the value
You can adjust values recorded by the Sampler, but the default set is comprehensive in monitoring ATG request handling performance.

B) Log Files:

* Access Logs:
You may enable access logs on your application server or web server to ensure that the time is really consumed inside your application and not in network traffic (download time).

* Application Logs:
Application logs can point to system calls, external-system timeouts, DB issues, exceptions, etc.; a lot of useful information can be gleaned from them.
You might see a “server not responding” message or an OutOfMemory error.

C) Thread Dump:
Thread dumps can be useful to see where threads are waiting. If too many threads are waiting, your site’s performance may be impaired by thread context switching. You might see throughput decrease as load increases if your server is spending too much time context switching between requests. Check the percentage of system CPU time consumed by your JVM; if it is more than 10% to 20%, this is potentially a problem. Thread context switching also depends in part on how your JVM schedules threads with different priorities.

Thread dumps can be taken from the Admin console or the command line.
They can also easily point to deadlocks, infinite loops, and other issues caused by buggy code.

D) Garbage Collection:
Check the JVM parameters that affect garbage collection, including the GC policy, and try to set the max and min heap sizes to the same value.
Check the garbage collection logs, particularly when you see spikes or abnormal behavior; the most important entries are the full garbage collection runs.

*Phases that stop the threads with the parallel garbage collection algorithm:
tail -f | grep 'Full'
*Phases that stop the threads with the Concurrent Mark Sweep algorithm:
tail -f | grep -E '(CMS-initial-mark)|(Rescan )|(concurrent mode failure)|(Trying a full collection)|(promotion failed)|(full)|(Full)'

If excessive pausing is noticed, one of several things could be wrong:
- JVM arguments may need to be tuned
- There may be a memory leak
- Repository/droplet/atg.service.cache.Cache caches may be over-utilized
- Load balancing might not be working properly, which would result in more sessions than normal hitting one instance.

While thread pauses are a normal part of garbage collection, excessive pauses must be minimized.

E) Cron Jobs:
Check the cron jobs running on the server and try to schedule them to run only during the servers' off-peak hours.

F) Profiling Tools:
-NetBeans Profiler
-Eclipse TPTP
or any other profiling tool that gives you detailed information about the time consumed in each operation, memory tracing, etc.

G) Load testing Tools:
*) URLHammer (ATG load tool):
To run the URLHammer program:
1. Set your CLASSPATH to include the directory /DAS/lib/classes.jar.
2. Run the following command:
java [arguments]
For example:
java http://examplehost:8840/ 5 10 -cookies
This creates five different threads, each of which represents a separate session that requests the specified
URL 10 times (50 requests total).

You can also run a script, either by editing the format yourself or by using RecordingServlet to create it.

*) Apache JMeter (open source)
*) HP Load Runner (commercial)

H) Client Side Performance Tools:
-Firebug (FF plugin)
-Fiddler (Standalone or plugin)
-DynaTrace (Standalone or plugin)
The most important thing is to identify whether certain resources (especially ones outside your domain) are taking much of the time.
Invalid configuration (e.g., pointing to another environment) can also be detected this way.
JavaScript performance might also be a reason for bad end-user performance.

I) Operating system performance Tools:
Monitoring System Utilization: Use a program like top (on Solaris), the Windows Performance Monitor, or a more sophisticated tool to keep track of information like:
• CPU utilization
• paging activity
• disk I/O utilization
• network I/O utilization
*If you are getting an “unable to create new native thread” error, your system memory is low, as each native thread reserves around 1 MB of memory for its stack.
*You can detect a file descriptor leak in two different ways:
• You may notice a lot of IOExceptions with the message “Too many open files.”
• During load testing, you periodically run a profiling script, such as lsof (on UNIX), and you notice that the list of file descriptors grows continually.

J) DB performance tuning:
- Take DB snapshots and analyze them.
- Take the most expensive SQL statements and generate execution plans for them; you may well find a missing index.
- You may need a DBA to analyze possible DB issues.
- Enable JDBC logging, retrieve the SQL queries, and try to optimize them outside the application (the debug level needs to be set to 15; you can also retrieve these queries from DB monitoring tools).

K) Adjusting the FileCache Size :
ATG’s servlet pipeline includes servlets that are used for JHTML pages, and which use a FileCache component to store files that ATG has read from disk, so that subsequent accesses for those files can be delivered directly from memory instead of being read from disk. Using the FileCache component improves performance by reducing disk accesses. For maximum performance, you want the FileCache to be large enough to hold all the files that ATG serves frequently.
Set the totalSize property of this component to an appropriate value, measured in bytes, such as the following:
# size in bytes (2 million bytes)
totalSize=2000000

One approach to sizing the FileCache is to batch-compile the entire document root and set the file cache to the resulting size. Make sure, however, that you account for the size of your FileCache when you set the size of your JVM. You can preload the FileCache by creating a script that accesses every page on your site and running it at startup.
You can view statistics on how the file cache is used, as well as its contents, on the Dynamo Administration page.

L) Code optimization:
You may scan the code in high-transaction scenarios, with special care for the pipelines and component scopes, to identify possible performance issues.
You may use tools for code optimization.
Some findings may require changes to a component's scope (global is the best performing, being loaded only once, while request-scoped components are initialized on each request, so minimize their use).
Avoid resolving components from code; instead, use ATG property files to inject them. In cases where you do not have that ability (such as derived properties), you may use a static method to get a reference to them (for global components).
Follow coding standards and best practices.
One possible reason for bad application performance is not following logging best practices, i.e., not checking the log level before logging the message.

M) Use Cache Droplet in your JSP pages:
The Cache droplet (component /atg/dynamo/droplet/Cache) caches content that changes infrequently; it is especially useful if producing the content involves a lot of processing or DB interactions.
**Required Input Parameters:
key: Lets you have more than one view of content, based on a value that uniquely defines the view. For example, if content is displayed one way for members and another way for non-members, you can pass the value of the member trait as the key parameter.
**Optional Input Parameters:
hasNoURLs: Determines how cached URLs are rendered for future requests. By setting hasNoURLs to false, you specify that subsequent requests for the cached content cause URLs to be rewritten on the fly, assuming URL rewriting is enabled. A setting of true for hasNoURLs causes URLs to be saved and rendered exactly as they are currently (without session or request IDs), regardless of whether URL rewriting is enabled.

cacheCheckSeconds: The interval after content is cached until the cache is regenerated. If omitted, the interval is set from the defaultCacheCheckSeconds property in the Cache servlet bean’s properties file.

**Open Parameters output
The code enclosed by the output open parameter is cached.

**Clearing the cache:
You can determine how often data is flushed for a given Cache instance on a JSP, or for all instances of Cache. To remove cached content associated with a particular instance of Cache, set the cacheCheckSeconds input parameter in the Cache instance to the frequency at which the associated data should be expired. If you omit this parameter, the Cache.defaultCacheCheckSeconds property is used (default value: 60 seconds).
The Cache.purgeCacheSeconds property determines how often content cached by any Cache servlet bean is flushed. The default is 21600 seconds (6 hours). Cache purging also occurs when a JSP is removed or recompiled.
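A minimal JSP sketch of the Cache droplet (the parameter values are illustrative, and the dsp taglib import is assumed):

```jsp
<dsp:droplet name="/atg/dynamo/droplet/Cache">
  <dsp:param name="key" param="profile.memberType"/>
  <dsp:param name="cacheCheckSeconds" value="300"/>
  <dsp:oparam name="output">
    <%-- expensive, rarely-changing content is rendered (and cached) here --%>
  </dsp:oparam>
</dsp:droplet>
```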

N) Limited hardware capacity:
This is the case for development environments, where you may consider moving one application (like the DB, Merch, etc.) outside the box.
You can also shut down the Merch if you do not need it up and running all the time.
Another trick (strictly for dev boxes) is decreasing the session timeout to 5 minutes, or decreasing the per-thread reserved memory.

O) ATG recommended Check List:
The following checklist can help you identify the most common sources of performance problems:
• Have you properly configured memory for your Java Virtual Machines? Have you set your -Xms and -Xmx arguments the same? Do all ATG heap sizes fall within the limits of physical memory?
• Has one or more servers stopped responding? There could be a number of causes, including a Java deadlock.
• Are you seeing many IOExceptions with the message “Too many open files”? You may have a file descriptor leak.
• At maximum throughput, look at the CPU utilization, database CPU utilization, I/O activity, and paging activity.
• If CPU utilization is low, then you may have an I/O or database bottleneck.
• If CPU utilization is high, then the bottleneck is most likely in the application code. Use a performance profiling tool to try to locate bottlenecks in the code. Review your code to make sure it uses good Java programming practices.
• If paging is occurring, adjust the memory allocated to your Java Virtual Machines.
• Look at the I/O and CPU utilization of the database. If utilization is high, database activity is probably slowing down the application.
• Are you receiving page compilation errors? You may not have enough swap space for page compilation.

Reference: ATG Platform documentation set, Version 9.1 (7/31/09), and others.
For more information, refer to the Java EE 7 Performance Tuning and Optimization book, published by Packt Publishing:

Wednesday, September 14, 2011

ATG Made Easy - part 6

ATG Commerce
2 versions: ATG Consumer Commerce, used for developing standard business-to-consumer (B2C) online stores, and ATG Business Commerce, used for sites oriented more toward business-to-business (B2B) uses.

1) Product Catalog & Custom Catalog:
The product catalog is a collection of repository items (categories, products, media, etc.) that provides the organizational framework for your commerce site. ATG Commerce includes a standard catalog implementation, based on the ATG SQL Repository, that you can use or extend as necessary.

The structure is built on Catalog --> Category --> Product --> Sku,
where you can modify the features of each level.
The most important point is to set the parent flag on the top-level categories.
Each level has parent/child attributes, e.g. parentCategories and childSkus at the product level.
You can have a linked template used to display each element.
Some droplets are dedicated to loading these elements: ItemLookupDroplet (generic) versus CategoryLookupDroplet, ProductLookupDroplet, and SKULookupDroplet.
They take an id as input and produce the item/elements as output.

MediaLookupDroplet is for media; 2 types exist: internal and external media (internally referenced versus a URL/file).
The data property is either binary (for media-internal-binary items) or text (for media-internal-text items).

ATG catalogs use several batch and dynamic services to perform catalog updates and to verify catalog relationships. These services are collectively referred to as the Catalog Maintenance System (CMS). The CMS updates property values that enable navigation and hierarchical search. It also verifies the relationships of catalog repository items and properties.
**Batch Services : as
• AncestorGeneratorService
• CatalogVerificationService
• CatalogUpdateService
• CatalogMaintenanceService
Each service creates a global lock at the start of execution, using the global lock management, and releases it upon completion. This prevents services that use the same lock name from executing simultaneously on the same server, or other servers in the cluster.
**Dynamic Services : as
• CatalogChangesListener
• CatalogCompletionService
• StandardCatalogCompletionService

Note that some of these services are available only for custom catalogs; others for both standard and
custom catalogs.

2) Order Management:
A session-scoped component, ShoppingCart (/atg/commerce/ShoppingCart), manages the current order (plus the last order and saved orders); the order is stored in memory and persisted frequently.
You can configure and customize it using orderrepository.xml (where you can define its components, cache sizes, and lock modes).
Once a SKU is added to the order, it becomes a CommerceItem, which is related to a ShippingGroup, which in turn is related to a ShippingAddress and ShippingMethod.
Before checkout the order is in the incomplete state; once submitted, it goes through further processing until it is fulfilled.
In the case of B2B, the order is held for the needed approvals.
One good feature in ATG Commerce is the ability to have scheduled orders.

Payment methods include: Credit card, Store credit, gift certificate, invoice request (in B2B)

CartModifierFormHandler (request scope):
A form handler that handles adding items to the order, removing items, changing quantities, and moving the order forward (payment, shipping info).

It also handles continue shopping, expressCheckout, update, etc.

*Business Layer:
Create an order, add items to an order, remove items from an order.
Once a SKU is added to the order, its SKU id becomes the catalog ref id of a CommerceItem object; the quantity, priceInfo, paymentGroup, and shippingGroup are captured as well.

Load/save and create orders, and allocate an order to a payment group.
These operations are handled by pipelines, as mentioned before when we discussed pipelines.
The business layer provides low-level raw operations and centralizes the functionality for purchase operations.

*Best Practice for Order Updating:
1.Acquire lock-manager write lock on profile id from the /atg/commerce/order/LocalLockManager
2.Begin Transaction
3.Synchronize on the Order object.
4.Modify Order
5.Call ((OrderImpl) pOrder).updateVersion();
6.Call OrderManager.updateOrder()
7.Release Order synchronization
8.End Transaction
9.Release lock-manager write lock on profile id from the /atg/commerce/order/LocalLockManager
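The steps above can be sketched as self-contained Java. ATG's ClientLockManager, TransactionDemarcation, and OrderManager are replaced here by simple stand-ins, so this only illustrates the ordering of the steps, not the real API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderUpdateSketch {

    // Stand-in for the per-profile write locks of a LocalLockManager.
    private static final ConcurrentHashMap<String, ReentrantLock> LOCKS =
            new ConcurrentHashMap<>();

    /** Minimal stand-in for an order with a version counter. */
    public static class Order {
        public int version = 0;
        public int itemCount = 0;
    }

    public static void updateOrder(String profileId, Order order) {
        ReentrantLock lock = LOCKS.computeIfAbsent(profileId, id -> new ReentrantLock());
        lock.lock();                       // 1. acquire write lock on profile id
        try {
            // 2. begin transaction (stand-in: no-op)
            synchronized (order) {         // 3. synchronize on the Order object
                order.itemCount++;         // 4. modify the order
                order.version++;           // 5. updateVersion()
                // 6. OrderManager.updateOrder() would persist the order here
            }                              // 7. release Order synchronization
            // 8. end transaction (stand-in: no-op)
        } finally {
            lock.unlock();                 // 9. release write lock on profile id
        }
    }
}
```

The key point is the nesting: the profile-level lock wraps the transaction, which wraps the order synchronization, and everything is released in reverse order.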

3) Purchasing and Fulfillment Services :
ATG Commerce provides tools to handle pre-checkout order-processing tasks such as adding items to a shopping cart, ensuring items are shipped by the customer’s preferred method, and validating credit card information. The system is designed for flexibility and easy customization; you can create sites that support multiple shopping carts for a single user, multiple payment methods and shipping addresses.
You can integrate with third-party authorization and settlement tools such as Payflow Pro, CyberSource, and TAXWARE.

4) Inventory Management :
The inventory framework facilitates inventory querying and inventory management for your site.
The InventoryManager is a public interface that contains all of the Inventory system functionality. Each method described below returns an integer status code. All successful return codes are greater than or equal to zero; all failure codes are less than zero. By default, the codes are:
INVENTORY_STATUS_SUCCEED = 0: There was no problem performing the operation.
INVENTORY_STATUS_FAIL = -1: There was an unknown/generic problem performing the operation.
INVENTORY_STATUS_INSUFFICIENT_SUPPLY = -2: The operation could not be completed because there were not enough of the item in the inventory system.
INVENTORY_STATUS_ITEM_NOT_FOUND = -3: The operation could not be completed because a specified item could not be found in the inventory system.
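A tiny sketch of consuming these status codes (the constant values are the defaults quoted above; the helper class and method names are hypothetical):

```java
public class InventoryStatus {
    public static final int SUCCEED = 0;
    public static final int FAIL = -1;
    public static final int INSUFFICIENT_SUPPLY = -2;
    public static final int ITEM_NOT_FOUND = -3;

    // Success is >= 0 and failure is < 0, per the InventoryManager convention.
    public static boolean isSuccess(int status) {
        return status >= 0;
    }

    public static String describe(int status) {
        switch (status) {
            case SUCCEED:             return "ok";
            case FAIL:                return "generic failure";
            case INSUFFICIENT_SUPPLY: return "not enough stock";
            case ITEM_NOT_FOUND:      return "item not found";
            default:                  return status >= 0 ? "ok (" + status + ")"
                                                         : "unknown failure";
        }
    }
}
```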

ATG Commerce includes the following implementations of the InventoryManager out of the box.
• AbstractInventoryManagerImpl
• NoInventoryManager
• RepositoryInventoryManager
• CachingInventoryManager
• LocalizingInventoryManager

Preventing Inventory Deadlocks
InventoryManager includes the acquireInventoryLocks and releaseInventoryLocks methods.
acquireInventoryLocks acquires locks for the inventory items that apply to the given IDs.
releaseInventoryLocks releases locks for the inventory items that apply to the given IDs.

5) Pricing Services :
ATG Commerce pricing services revolve around pricing engines and pricing calculators.
**The pricing engine determines the correct pricing model for an order, individual item, shipping charge, or tax, based on a customer’s profile.
**The pricing calculator performs the actual price calculation based on information
from the pricing engine.

Pricing engines are responsible for three tasks:
• Retrieving any promotions that are available to the site visitor.
• Determining which calculators generate the price.
• Invoking the calculators in the correct order.

Pricing calculators are responsible for the following:
• Looking up the price in the catalog by priceList.
• Invoking a qualifier service that identifies the objects to discount.
• Using information they receive from the engines and from the qualifier service to
perform the actual process of determining prices.

By default, ATG Commerce can perform dynamic pricing for the following types of pricing object:
• Items. Each item has a list price that can be specified in the listPrice property of the Product Catalog repository.(Note that an “item” is a CommerceItem, which represents a quantity of a SKU or a product).
• Orders.
• Shipping price.
• Tax.

-Qualifier: A service that interprets a PMDL rule and decides what, if anything, may be discounted. The term qualifier also refers to the first part of a PMDL rule, which defines when something can receive a discount.
-Target: The second part of a PMDL rule is called the target; it defines which part of an order receives a discount.

4 Types of PriceInfo objects:
OrderPriceInfo,ItemPriceInfo, ShippingPriceInfo, and TaxPriceInfo

How does this work?
1-The pricing engine is invoked from business-layer logic, such as a PriceItem servlet
bean in a page or the ATG Commerce PricingTools class.
2-The pricing engine applies its configured precalculators. A precalculator modifies a
price without using any associated promotions.
3-The pricing engine accesses the current customer's profile and retrieves any
promotions listed in the activePromotions property of the profile.
4-The pricing engine builds a list of global promotions and concatenates the two lists.
5-The pricing engine applies promotions by priority (each promotion has a pricingCalculatorService property that specifies the calculator the system must use to apply it).
6-The pricing engine applies its configured postcalculators.
7-The pricing engine modifies the PriceInfo object of the object being discounted.
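The seven steps above can be sketched as a toy pipeline. Everything here (ToyPricingEngine, the Promotion holder, the numeric PriceInfo) is illustrative, not the ATG API; it only shows precalculators, priority-ordered promotions, and postcalculators mutating one PriceInfo:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Consumer;

// Toy stand-in for a price holder the engine mutates.
class PriceInfo { double amount; PriceInfo(double a) { amount = a; } }

class ToyPricingEngine {
    final List<Consumer<PriceInfo>> preCalculators = new ArrayList<>();
    final List<Consumer<PriceInfo>> postCalculators = new ArrayList<>();

    // A promotion pairs a priority with the calculator that applies it.
    static class Promotion {
        final int priority; final Consumer<PriceInfo> calculator;
        Promotion(int p, Consumer<PriceInfo> c) { priority = p; calculator = c; }
    }

    PriceInfo price(double listPrice, List<Promotion> profilePromos,
                    List<Promotion> globalPromos) {
        PriceInfo info = new PriceInfo(listPrice);
        preCalculators.forEach(c -> c.accept(info));          // step 2
        List<Promotion> all = new ArrayList<>(profilePromos); // step 3
        all.addAll(globalPromos);                             // step 4: concatenate
        all.sort(Comparator.comparingInt(p -> p.priority));   // step 5: by priority
        all.forEach(p -> p.calculator.accept(info));
        postCalculators.forEach(c -> c.accept(info));         // step 6
        return info;                                          // step 7: modified PriceInfo
    }
}
```

Note the ordering matters: a 10%-off promotion applied before a flat discount yields a different total than the reverse, which is why the engine sorts by priority first.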

-When building a product catalog, you must decide whether your site requires dynamic product pricing and, if so, how granular you need it to be. Using dynamic pricing on a product page can cause a significant decrease in performance compared to using static pricing.

-With static pricing, each item in the catalog has a list price stored in the listPrice property of the catalog repository.
Volume pricing can be bulk (for example, 100 at 10, 200 at 9, etc.)
or tiered (first 100 -> 10, next 100 -> 9, ...).
Static pricing uses two droplets: PriceDroplet and PriceRangeDroplet.
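The bulk/tiered distinction is simple arithmetic; here is a sketch (the level layout is an assumption for illustration, not the catalog repository format):

```java
// Bulk: all units are priced at the rate of the highest level reached.
// Tiered: each band of units is priced at its own rate.
class VolumePricing {
    // levels: {minQuantity, pricePerUnit}, sorted ascending by minQuantity
    static double bulk(long qty, long[][] levels) {
        double perUnit = 0;
        for (long[] lvl : levels) if (qty >= lvl[0]) perUnit = lvl[1];
        return qty * perUnit;
    }

    // tiers: {bandSize, pricePerUnit}; units are consumed band by band
    static double tiered(long qty, long[][] tiers) {
        double total = 0;
        long remaining = qty;
        for (long[] t : tiers) {
            long inBand = Math.min(remaining, t[0]);
            total += inBand * t[1];
            remaining -= inBand;
            if (remaining == 0) break;
        }
        return total;
    }
}
```

For 150 units with levels 1->12, 100->10, 200->9, bulk pricing charges all 150 at 10; tiered pricing charges the first 100 at 10 and the next 50 at 9.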

atg.commerce.pricing.PricingEngine is the main interface for interacting with the
atg.commerce.pricing package. Extensions of this interface describe objects that calculate a price for a specific class of object. For example, OrderPricingEngine extends PricingEngine and calculates prices for orders passed to it.

All PricingEngine implementations process promotions. The PricingEngine interface itself contains only one method, getPricingModels, which extracts a collection of promotions from an input profile. (Price Model can be customized by priceModel.xml).
The PricingEngine interface itself does not describe any functionality other than the promotion extraction API because of the wide range of information that different PricingEngine implementations might require to calculate a price for their specific class of object. For example, the ItemPricingEngine implementation needs one set of input parameters, while the OrderPricingEngine needs a different set.

The individual extensions of the PricingEngine interface contain the API methods for generating a given type of price. There is a Java object type for each type of price that is generated. For example,atg.commerce.pricing.OrderPricingEngine inherits the promotion extraction API from PricingEngine and defines one new method, priceOrder, to generate a price for an order in a given context.

ATG Commerce provides the following four extensions of the main PricingEngine interface:
• atg.commerce.pricing.ItemPricingEngine
Provides a price for atg.commerce.order.CommerceItem objects.
• atg.commerce.pricing.OrderPricingEngine
Provides a price for atg.commerce.order.Order objects.
• atg.commerce.pricing.ShippingPricingEngine
Provides a price for atg.commerce.order.ShippingGroup objects.
• atg.commerce.pricing.TaxPricingEngine
Determines tax for atg.commerce.order.Order objects.

PricingTools Class
The atg.commerce.pricing.PricingTools class performs a variety of pricing functions for different types of pricing engines. It also has a number of static, currency-related methods for use by all pricing engines.
The PricingTools class is the main way that business-layer logic interacts with the pricing engines and the other classes in the atg.commerce.pricing package.

The properties of PricingTools are as follows:
• itemPricingEngine: The pricing engine that calculates prices for items, both
individually and in groups. An item is identified as a set quantity of a SKU or product.
• orderPricingEngine: The pricing engine that calculates prices for orders. Typically,
the price is just the sum of the prices of the items in the order. However, the order
might be discounted separately from the constituent items.
• shippingPricingEngine: The pricing engine that calculates prices for shipping
groups. An order contains one or more shipping groups when the contents of the
order require shipping for delivery. An order has no shipping groups when it is
delivered online, and shipping is therefore not calculated for this type of order.
• taxPricingEngine: The pricing engine that calculates tax for orders. Tax is calculated
on the order total.
• roundingDecimalPlaces: Specifies the number of decimal places to which an
input price is rounded. This property is used by the round, roundDown and
needsRounding methods.

Important Methods:
priceEachItem, priceItem, priceItemsForOrderTotal, priceOrderForOrderTotal, priceOrderTotal, priceShippingForOrderTotal, priceTaxForOrderTotal, needsRounding, round.
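The rounding helpers can be imitated with java.math.BigDecimal. This is a sketch of what round, roundDown and needsRounding conceptually do with roundingDecimalPlaces = 2, not ATG's implementation:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative equivalents of the PricingTools rounding methods,
// assuming roundingDecimalPlaces = 2.
class Rounding {
    static final int PLACES = 2;

    static double round(double v) {
        return BigDecimal.valueOf(v).setScale(PLACES, RoundingMode.HALF_UP).doubleValue();
    }

    static double roundDown(double v) {
        return BigDecimal.valueOf(v).setScale(PLACES, RoundingMode.DOWN).doubleValue();
    }

    // True when the value carries more decimal places than allowed.
    static boolean needsRounding(double v) {
        return BigDecimal.valueOf(v).scale() > PLACES;
    }
}
```

Using BigDecimal rather than naive double arithmetic avoids the usual floating-point surprises when currency values are rounded.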

Pricing Servlet Beans: you can insert these on site pages as required and use
them to perform dynamic pricing.
• AvailableShippingMethods Servlet Bean
• ItemPricingDroplet Servlet Bean
• PriceEachItem Servlet Bean
• PriceItem Servlet Bean
• PriceDroplet Servlet Bean
• ComplexPriceDroplet Servlet Bean

*Price Lists:
Price Lists allow you to target a specific set of prices to a specific group of customers.
The PriceListManager class maintains the price lists. A price may be retrieved from the PriceListManager from a given price list by product, by SKU, or by a product/SKU pair.
The most important method in PriceListManager is getPrice. This is used during pricing of an order to get the correct price for a given product/SKU pair.

It is configured using priceList.xml
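The lookup order (product/SKU pair, then SKU, then product) can be sketched with a plain map. ToyPriceListManager is hypothetical and only mirrors the getPrice contract described above:

```java
import java.util.HashMap;
import java.util.Map;

// Toy price-list lookup: a price can be registered per product, per SKU,
// or per product/SKU pair within a named price list.
class ToyPriceListManager {
    private final Map<String, Double> prices = new HashMap<>();

    private static String key(String list, String product, String sku) {
        return list + "|" + product + "|" + sku;
    }

    void setPrice(String list, String product, String sku, double price) {
        prices.put(key(list, product, sku), price);
    }

    // Most specific match wins: product/SKU pair, then SKU alone,
    // then product alone.
    Double getPrice(String list, String product, String sku) {
        Double p = prices.get(key(list, product, sku));
        if (p == null) p = prices.get(key(list, null, sku));
        if (p == null) p = prices.get(key(list, product, null));
        return p;
    }
}
```

A pair-level price overrides a product-level price for the same product, which is the behavior you typically want when one SKU of a product is discounted.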

6) Targeted Promotions :
Business managers can use ATG Commerce promotions to highlight products and offer discounts as a way of encouraging customers to make purchases.

Promotions come in three types: ItemDiscount, OrderDiscount and ShippingDiscount.

Pricing Model Description Language (PMDL): rule describing the conditions under which this promotion should take effect. The rule is created in the ATG Control Center using the interface designed for promotion creation.

7) Order Management Web Services :
All order management web services are included in commerceWebServices.ear in the
orderManagement.war web application.

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09

Tuesday, September 13, 2011

ATG Made Easy - part 5

ATG Personalization Features:

One of the power features of ATG is the personalization module, here is a quick list of the available features:

1) Internal and External User Profiles:

The default external user profile repository is /atg/userprofiling/ProfileAdapterRepository,
which is defined by the userProfile.xml file located in <ATG9dir>\DPS\config\profile.jar. Each
ATG application that adds properties to the external user profile stores its userProfile.xml file in an
ExternalUsers sub-module.
Internal profiles are stored in the /atg/userprofiling/InternalProfileRepository, defined by the
internalUserProfile.xml file in <ATG9dir>\DPS\InternalUsers\config\config.jar.
A parallel set of database tables also exists for internal user profiles. Where the user item in the
ProfileAdapterRepository references the dps_user table, the user item in the
InternalProfileRepository points to a dpi_user table, and so on.

You can extend/replace and customize the userProfile.xml using the same way as any repository.

In ATG, you can configure the profile to use SQL or LDAP or both.
By changing /atg/userprofiling/ProfileTools:

The LDAP configuration is based on views, object classes and properties:
<!-- user view -->
<view name="user" default="true">
<!-- object classes -->
<!-- properties -->
<property name="login" ldap-name="uid" data-type="string" required="true">
<attribute name="unique" value="true"/>
</property>
<property name="password" ldap-name="userpassword" data-type="string"/>
</view>

Whenever a user accesses a site that uses the Personalization module, two different mechanisms are used
to track the user’s actions:
• A session is created for the user, and is maintained either through a cookie or through
URL rewriting.
• The user is associated with a profile.
To change the secret key that the Personalization module uses to hash the user ID cookie, edit the
following property of /atg/userprofiling/CookieManager:

Profile Form Handlers:

-handleCreate: Creates a new permanent profile and sets the profile attributes to the values entered in the form.
-handleUpdate: Modifies the attributes of the current profile.
-handleLogin: Uses the login and password values entered by the user to associate the correct profile with that user.
-handleChangePassword: Changes the password attribute of the profile to the new value entered by the user.
-handleLogout: Resets the profile to a new anonymous profile and optionally expires the current session.
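A minimal sketch of the handleXxx pattern these methods follow, assuming nothing of the real ProfileFormHandler beyond what is described above (validate input, record a form exception, and choose between a success and an error URL):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Not the real ProfileFormHandler API -- an illustrative stand-in showing
// the validate / record-exception / pick-redirect-URL shape of handleLogin.
class ToyProfileFormHandler {
    final Map<String, String> accounts = new HashMap<>(); // login -> password
    String login, password;
    String loginSuccessURL = "home.jsp";
    String loginErrorURL = "login.jsp";
    final List<String> formExceptions = new ArrayList<>();

    // Returns the URL the form flow should continue to.
    String handleLogin() {
        if (login == null || !Objects.equals(password, accounts.get(login))) {
            formExceptions.add("Invalid login or password");
            return loginErrorURL;
        }
        return loginSuccessURL;
    }
}
```

In the real handler the redirect is performed by the framework from the successURL/errorURL properties; the sketch just makes the branching explicit.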

Access Control
Control access to some or all of the pages on the site.
AccessControlServlet (/atg/userprofiling/AccessControlServlet):

# Nucleus path of the Profile object
# List of mappings between paths and AccessController objects. If a
# path refers to a directory, all the documents in that directory and
# its subdirectories will be protected by the given AccessController.
# List of "access allowed" event listeners
# accessAllowedListeners=
# List of "access denied" event listeners
# accessDeniedListeners=
# The URL to redirect to if access is denied. If the AccessController
# supplies its own deniedAccessURL, it will overwrite this value.

This implementation of AccessController performs access control based on a set of rules, specified via the service’s ruleSetService property. For example, suppose there is a RuleSetService named
FemaleRuleSetService, configured with the following rule set:
<rule op=eq>
<valueof target="Gender">
<valueof constant="female">
</rule>
Set the ruleSetService property of the AccessController to point to
FemaleRuleSetService. The user will be allowed access only if she is in the Female profile
group. Here is the example configuration:
# Rules used to determine whether access should be allowed
# URL to redirect to if access is denied

2) Targeting Content:

Creating Rules for Targeting Content:
<rule op=eq name="Rubber sector">
<valueof target="industrySector">
<valueof constant="rubber">
</rule>
Accept Rules
Reject Rules
Sorting directives
<rule ...> ... </rule>
<rule ...> ... </rule>
<ruleset src=...> ... </ruleset>
<sortbyvalue ...>

Setting Up Targeting Services:
To set up a RuleSetService for your rule set, create an atg.targeting.RuleSetService component.
This component can reference a rules file, or it can itself include your targeting rules as a property

So you set either:
-rulesFilePath:If your Rule Set Service refers to a rules file, set this property to the file path of the rules file. This path can
be an absolute path or a relative path starting from your <ATG9dir>/home directory.
-ruleSet=xml rules structure

** Targeter Example: calling slot from targeter

<dsp:droplet name="/atg/targeting/TargetingFirst">
<dsp:param name="targeter" bean="/atg/registry/slots/aricleSlot"/>
<dsp:param name="howMany" value="1"/>
<dsp:oparam name="output">
<dsp:a href="articleDetails.jsp">
<dsp:param name="itemId" param=""/>
.....other retrieved parameters ....
</dsp:a>
</dsp:oparam>
</dsp:droplet>

Targeted E-mail:
You can use the Targeted E-mail services included with the Personalization module to compose and deliver e-mail using the same profile groups and targeting rules you use to deliver content on your Web site.

You create targeted e-mail using the TemplateEmailInfoImpl class.
This class draws the message body of an e-mail from a page template, invokes a MessageContentProcessor component to process the content, and then passes the resulting JavaMail object to the TemplateEmailSender component, which sends the message. The properties of a TemplateEmailInfoImpl object store values that are specific to an individual e-mail campaign, so you should create a separate instance of this class for each campaign.
*Key properties of the TemplateEmailInfoImpl class:
contentProcessor: the MessageContentProcessor responsible for processing the message content. Default: /atg/userprofiling/email/HtmlContentProcessor; another valid value is SimpleContentProcessor.

The HtmlContentProcessor can be further configured according to the needed formatting.

3) Scenario Module:
Scenarios are event-based (what does the user or system do?), while targeters are knowledge-based (what does the user's profile snapshot contain?).

The configuration file scenarioManager.xml is the place where information common to all scenario servers is specified. This file uses the Process Manager DTD, located in
<ATG9dir>\DSS\lib\classes.jar.
A cluster of ATG servers must always contain the following:
• exactly one process editor server
• zero or more global scenario servers
• zero or more individual scenario servers

The main Nucleus component responsible for scenario operations is located at /atg/scenario/ScenarioManager. To examine the scenarios handled by this service, point your Web browser to the ATG Dynamo Server Admin page at:

You can define Access Control for a scenario as well.
You can bind the scenario to events such as:
Collective events:
• InboundEmail Event
• Shutdown Event
• Startup Event
• GSAInvalidation Event
Individual events:
• Login Event
• Logout Event
• Register Event
• AdminRegister Event
• StartSession Event
• EndSession Event

The scenario is composed of actions. Scenario actions are implementations of the atg.process.action.Action interface, or directly extend the ActionImpl class.
The main methods are:
-initialize (map of parameters)
-configure (Config object): you can cast this to your customized configuration object according to the action definition XML; the config object extends GenericService and contains getters and setters for all configuration elements.
-executeAction(ProcessExecutionContext context): here you have access to the request, response, user, events, parameters, properties set by the admin for all users, etc.
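A sketch of this lifecycle using toy stand-ins (ToyAction and plain maps for the parameters and context) rather than the real atg.process.action classes:

```java
import java.util.Map;

// Simplified stand-in for the Action lifecycle described above.
interface ToyAction {
    void initialize(Map<String, String> params);   // parameters from the action XML
    void executeAction(Map<String, Object> context); // request/profile/event access
}

// A hypothetical action that writes a message into the execution context.
class LogUserAction implements ToyAction {
    private String prefix;

    public void initialize(Map<String, String> params) {
        // values here would come from the action definition XML
        prefix = params.getOrDefault("prefix", "user:");
    }

    public void executeAction(Map<String, Object> context) {
        // the real ProcessExecutionContext exposes request, response,
        // profile, event, and so on; a map stands in for it here
        context.put("lastMessage", prefix + context.get("userId"));
    }
}
```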

Configuration file : scenarioManager.xml

- The logical name of the action, as passed to an action handler (required).
- A Java class that implements the atg.process.action.Action interface (required).
- The Nucleus path of the action's configuration file (optional).
Most other elements are optional; only the action name and class are required.

Default scenario actions:
• Modify Action
• Set Random Action
• Redirect Action
• FillSlot Action
• EmptySlot Action
• Disable Scenario Action
• Record Event Action
• Record Audit Trail Action
• Filter Slot Contents Action
• Add Marker To Profile Action
• Remove All Markers From Profile Action
• Remove Markers From Profile Action
• Add Stage Reached Action
• Remove Stage Reached Action
• E-mail-Related Actions: EmailNotify and SendEmail

4) Using Slots :

Slots are containers that you can use to display and manage dynamic items on your Web site. You use targeting servlet beans to include slots in site pages, and you use scenarios to fill them with content.
Slots are components of class atg.scenario.targeting.RepositoryItemSlot or
atg.scenario.targeting.Slot. A slot component must have a Nucleus address in the /atg/registry/slots folder.
You can create slot components in two ways:
• By manually creating a .properties file
• Through the slot wizard in the ACC

The following is an example of a .properties file for a slot component of class atg.scenario.targeting.RepositoryItemSlot:
$description=displays fund news to brokers

Most important are:
repository = source of content, and itemDescriptorName.
The Event Generation option corresponds to generation (a property of type int) in the slot component .properties file. For Never, specify 1. For When Empty, specify 0.
Item retrieval: 0 = static; 1 = rotating; 2 = destructive (displayed only once, then removed)
Ordering: 0 = shuffle, 1 = random
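As a sketch, a slot component .properties file might combine these settings as follows. The Nucleus path, repository path and item descriptor are assumptions for illustration, and property names other than generation are inferred from the option names above and may differ in your ATG version:

```properties
# Hypothetical /atg/registry/slots/FundNewsSlot.properties
$class=atg.scenario.targeting.RepositoryItemSlot
$description=displays fund news to brokers
$scope=session
# assumed repository path and item descriptor
repository=/atg/content/news/NewsRepository
itemDescriptorName=article
# 0 = fire event When Empty, 1 = Never
generation=0
# 0 = static, 1 = rotating, 2 = destructive
retrieval=1
# 0 = shuffle, 1 = random
ordering=0
```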

A lot of other features still exist in this module.

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09

ATG Made Easy - part 4

11) ATG Search:
The form handler class is atg.repository.servlet.SearchFormHandler.
This handler uses the search criteria to return items from one or more repositories, and one or more item types from a given repository. SearchFormHandler is a subclass of atg.repository.servlet.GenericFormHandler, so it inherits properties and handler methods common to all form handlers.

*Supports the following search methods:
• Keyword search
• Text search
• Hierarchical search
• Advanced search
• Combination search

Important properties:
==>Whether search proceeds or throws an error when no search criterion is specified. The default value is true.
==>Whether previously entered criteria are included. The default value is false.
==>The items located by a search. The default value is false.
==>URL that opens when a search fails. Set this property to the URL of a page that describes the error and provides tools for starting another search.
==> URL that opens when a search operation succeeds. This property should point to the URL for the first search results page.
==> must be defined for the repository specified in the repositories property.
==>Tracks whether a query that uses this SearchFormHandler occurred during the current session. When a search begins, this property is set to true.
==> A comma-delimited list of repositories to include in the search. Specify the full path of each repository to the component.

A) KeyWord Search:
• doKeywordSearch is set to true to enable keyword searching
• keywordSearchPropertyNames specifies one or more single or multi-valued
properties to include in searches. If this property is empty, the form handler searches
all string properties of each repository item, except enumerated or non-queryable properties.

Logical Operators:
• NOT / !
• AND / &
• OR / |

B) Text Search:
• doTextSearch is set to true to enable text searching.
• textSearchPropertyNames specifies which properties to query.

-allowWildcards (default true; allows use of * in search strings)
-searchStringFormat :
Format that the text should follow.
Each repository component uses this property to specify the text format available to the
database. Available options include:
- ORACLE_CONTEXT: Oracle ConText
- MSSQL_TSQ: Microsoft SQL Server
- SYBASE_SDS: Sybase SpecialtyDataStore
- DBS_TEXT_EXT: IBM DB2 with the Text Extender package

C) Hierarchical Search:
A hierarchical search looks in a family of items, starting from a given item level and extending to that item's descendants, and returns matching items from that family.
Each item type must have a multi-valued property whose value is a complete list of its ancestor IDs.
Hierarchical searching restricts the search results to items whose ancestor items include the item specified in the ancestorId property.

• doHierarchicalSearch is set to true to enable hierarchical searching.
• ancestorPropertyName specifies the name of the multi-valued property that represents the inheritance scheme or lineage.
• ancestorId is the repository ID that represents an inheritance scheme or lineage.
This property is set to the repository ID of the item whose descendants you want to search, typically obtained through a hidden input tag or supplied by the form user through an input field.

D) Advanced Search:
• doAdvancedSearch is set to true to enable advanced searching.
• advancedSearchPropertyNames List of properties that are searched, where each property is
specified in this format:
• advancedSearchPropertyRanges Property range used to narrow search results by integer
values. The specified range is inclusive and uses this format:
You can leave the search range open at either end by
leaving the maximum or minimum value undefined.
• advancedSearchPropertyValues List of property values used in the search query. The format
should appear as follows:
• clearQueryURL The URL of the page to display when the handleClearQuery() method is invoked. The specified page should have search tools. If this property is empty, handleClearQuery() returns control to the current page.
• displayName The name displayed for an item type when it is used as search criteria.
• propertyValuesByType Holds the properties in advancedSearchPropertyNames
and their values in a HashMap key/value relationship. This only applies to those properties in
advancedSearchPropertyNames that are enumerated, RepositoryItems, or collection of RepositoryItems.
When the key is the name of a property, the value is a collection of possible values.
When the key is a repositoryId, the value is a collection of values for a specific property.
When the key is a collection of repositoryIds, the value is repositoryIds.
The format should appear as follows:

** Search Results Properties:
SearchFormHandler defines the following search results properties:
-currentResultPageNum : page number of the active page. Set this property in order to display a given page. The default setting is 1.
-enableCountQuery : enables access to the other query result properties.
-endCount : number of the last item on the current page.
-endIndex : zero-based index number of the last item on the current page. The form handler uses this property to calculate the endCount property.
-maxResultsPerPage : Maximum number of items that display on a page.
-maxRowCount : Maximum number of items returned from a search. A value of –1 (the
default) returns all items that satisfy the search criteria.
-resultPageCount : Number of pages that hold search results.
-resultSetSize : Number of items returned from the search.
-startCount : number of the first item on the current page.
-startIndex : zero-based index number of the first item on the current page. The form handler uses this property to calculate the startCount property.

You can use these properties to construct the search results page.
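The paging numbers are simple arithmetic over resultSetSize and maxResultsPerPage. A sketch (the helper names are mine, not the form handler's; counts are 1-based, indexes 0-based):

```java
// Deriving the paging values described above.
class Paging {
    // Number of pages needed to hold all results (ceiling division).
    static int resultPageCount(int resultSetSize, int maxResultsPerPage) {
        return (resultSetSize + maxResultsPerPage - 1) / maxResultsPerPage;
    }

    // 1-based number of the first item on the given page.
    static int startCount(int page, int maxResultsPerPage) {
        return (page - 1) * maxResultsPerPage + 1;
    }

    // 1-based number of the last item on the given page,
    // clipped to the total result count on the final page.
    static int endCount(int page, int maxResultsPerPage, int resultSetSize) {
        return Math.min(page * maxResultsPerPage, resultSetSize);
    }
}
```

For example, 53 results at 10 per page gives 6 pages, and the last page shows items 51 through 53.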

12) DSP Tag List:
Supports passing values to component properties and page parameters on click-through. Also handles URL rewriting.
Starts a transaction.
Completes a transaction.
Determines whether a Collection contains a specific single valued item.
Determines whether a Collection contains an EJB.
Manages a transaction by starting the transaction, checking for errors, rolling back the transaction when errors are found and committing it when they are not.
Invokes an ATG Servlet Bean.
Determines whether two EJBs have the same primary keys.
-dsp:form Encloses a form that can send DSP form events.
-dsp:frame Embeds a page by encoding the frame src URL.
Introduces a page parameter or component property value as an element in a JSP.
-dsp:go Encloses a form that can send WML form events.
-dsp:iframe Embeds a dynamic page, by encoding the frame src URL.
-dsp:img Inserts an image.
Imports a Nucleus component into a page so it can be referred to without using its entire pathname. Also, creates a reference to a Nucleus component in an attribute visible to EL expressions.
Embeds a page into another page.
Passes values to a component property on submission.
References a page, such as a stylesheet, by encoding the link src URLs.
Specifies content to be rendered by an enclosing dsp:droplet tag.
Specifies content to be rendered by an enclosing dsp:select tag.
Provides the sorting pattern to parent tag dsp:sort.
Enables ATG page processing functionality.
Stores a value in a parameter.
Passes values to a component property on submission (WML).
Sets a component property from dsp:a tag.
Causes any actions in the current transaction to be returned to their pretransaction state.
Passes values to a component property on submission.
Specifies that, when a transaction is prompted to end, it ends in a rollback action.
Sets the value of a component property or a page parameter to a specified value.
-dsp:setxml Sets attribute display format to XML or HTML.
Organizes the contents of a Container or array based on a sorting pattern.
Makes an object’s descriptive information available so other tags can find out its size, data type, and so on.
Passes values to a component property on submission.
Introduces a page parameter, standard JavaBean or Dynamic Bean component, or constant value as an element in a JSP that other tags can render using EL.
Reads the current transaction’s status.
Retrieves and displays the value of a page parameter,component property, or constant value.

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09

Monday, September 12, 2011

ATG Made Easy - part 3

9) JMS & ATG:

The Dynamo Application Framework includes a number of JMS-related tools, which are known collectively as the Dynamo Messaging System (DMS). The main parts of DMS are:

a) Two JMS providers, Local JMS and SQL JMS. Local JMS is built for high-speed, low-latency
synchronous messaging within a single process. SQL JMS is more robust, and
uses an SQL database to handle communication between components within the
same Dynamo application, or components running in different processes.

b) Patch Bay is an API and configuration system layered on top of JMS. Patch Bay is
designed to ease the development of messaging applications in Dynamo. The Patch
Bay API allows Nucleus components to send and receive messages. The configuration
system uses an XML file to specify how these components should be connected. This
file allows developers to change or add connections between components without
changing code.

**) Local JMS:

Local JMS does no queuing. When a message is sent, Local JMS immediately finds out who the receivers are and calls the appropriate methods on the receivers to deliver the message, waiting for each receiver to process the message before delivering the message to the next receiver. Only when the message has been delivered to all receivers does control return to the sender. In this way, Local JMS works more like Java Bean events than like typical JMS implementations; when a Java Bean fires an event, it actually calls a method on several registered listeners.
Local JMS is also non-durable; all messages are non-persistent. If a message is sent to a queue destination that has no listeners, the message disappears. Also, durable subscriptions to topic destinations act exactly like non-durable subscriptions—if a subscriber is not listening to a topic, it misses any messages sent to that topic whether it is subscribed durably or not.
Local JMS is most often used to pass data around to various components within a single request.

**) SQL JMS:
SQL JMS provides asynchronous messaging and message persistence, using an SQL database to persist messages.
This ensures that messages are not lost in the event of system failure, and enables support for persistent queues and durable subscriptions, as described in Message Persistence.

c) Patch Bay:
Patch Bay is designed to simplify the process of creating JMS applications. Patch Bay includes a simplified API for creating Nucleus components that send and receive messages, and a configuration file where you declare these components and your JMS destinations. When a Nucleus-based application starts up, it examines this file and automatically creates the destinations and initializes the messaging components.
This means your code does not need to handle most of the JMS initialization tasks, such as obtaining a ConnectionFactory, obtaining a JMS Connection, and creating a JMS Session

Patch Bay is represented in Nucleus as the component /atg/dynamo/messaging/MessagingManager, which is of class atg.dms.patchbay.PatchBayManager

MessagingManager uses an XML file called the DMS configuration file to configure the individual parts of the Patch Bay system, such as JMS providers, message sources and sinks, and destinations. The definitionFile property of the MessagingManager component names the DMS configuration file.

Components (all must be globally scoped):
*Message source: A component that can send messages. A message source must implement the atg.dms.patchbay.MessageSource interface.
==>create and send messages
* Message sink: a component that can receive messages. A message sink must implement the atg.dms.patchbay.MessageSink interface.
==> This interface defines a single method, receiveMessage, which is called to notify the message sink that a message is being delivered.
* Message filter: a component that implements both interfaces, and can send and receive messages.
==>Message filters must implement both the MessageSource and the MessageSink interface. A message filter typically implements receiveMessage by manipulating the message in some way, then sending a new message.
* In addition to your sources and sinks, you must also define standard JMS destinations;
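The source/sink/filter pattern can be sketched with toy interfaces. ToyMessageSink is a simplified stand-in for atg.dms.patchbay.MessageSink (the real receiveMessage takes a JMS Message and can throw JMSException); the point is that a filter receives a message, manipulates it, and sends a new one downstream:

```java
// Simplified stand-in for the Patch Bay sink contract.
interface ToyMessageSink {
    void receiveMessage(String portName, String message);
}

// A filter is both a sink (it receives) and a source (it re-sends).
class UppercaseFilter implements ToyMessageSink {
    private final ToyMessageSink next; // downstream destination

    UppercaseFilter(ToyMessageSink next) { this.next = next; }

    public void receiveMessage(String portName, String message) {
        // manipulate the message, then forward a new one
        next.receiveMessage(portName, message.toUpperCase());
    }
}
```

In real Patch Bay the wiring between sources, filters, and sinks is declared in the DMS configuration file rather than passed through a constructor.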

Patch Bay is represented in Nucleus as the component /atg/dynamo/messaging/MessagingManager.
The definitionFile property of the component MessagingManager names the XML file that
configures Patch Bay. The value of this property is:

Example for defining the 3 elements:
<?xml version="1.0" ?>

You can use SQL JMS as your JMS provider for your own applications. However, if you are running the ATG platform on IBM WebSphere Application Server or Oracle WebLogic Server, you might prefer to use your application server’s JMS provider.

NB: Use a topic if you have multiple subscribers that each provide different functionality and you want all of them to listen to it; use a queue if you have a single functionality and need only one type of listener that processes each message exactly once.

10) Search Engine Optimization:
Search Engine Optimization (SEO) is a term used to describe a variety of techniques for making pages more accessible to web spiders (also known as web crawlers or robots), the scripts used by Internet search engines to crawl the Web to gather pages for indexing. The goal of SEO is to increase the ranking of the indexed pages in search results.

Available Options:
-URL Recoding
-Canonical URLs
-SEO Tagging

*Jumping Servlet:
The atg.repository.seo.JumpServlet class is responsible for translating static request URLs to their dynamic equivalents. This class extends the atg.servlet.pipeline.InsertableServletImpl class, so it can be inserted in the DAS or DAF servlet pipeline. However, because this servlet is intended to process only static URLs, and incoming URLs are typically dynamic, including the servlet in a pipeline may be very inefficient. Therefore, it is generally preferable to configure it as a URI-mapped servlet in the web.xml file of your application, to ensure that it processes only static URLs.

To configure the jump servlet in a web.xml file, you actually declare another class,
atg.repository.seo.MappedJumpServlet. This is a helper class that invokes the JumpServlet
component. In addition, you declare a servlet mapping for the pattern that the servlet uses to detect static request URLs.
For example, if you have configured your static URLs to include /jump/ immediately after the context root, the entry in the web.xml file would be something like this:
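As a sketch (the servlet name, the jumpServlet init parameter, and the /jump/ prefix are assumptions for illustration; only the MappedJumpServlet class name comes from the text above), the web.xml fragment might look like:

```xml
<!-- Hypothetical declaration of the jump servlet helper -->
<servlet>
  <servlet-name>MappedJumpServlet</servlet-name>
  <servlet-class>atg.repository.seo.MappedJumpServlet</servlet-class>
  <init-param>
    <!-- assumed parameter name pointing at the JumpServlet component -->
    <param-name>jumpServlet</param-name>
    <param-value>/atg/repository/seo/JumpServlet</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>MappedJumpServlet</servlet-name>
  <url-pattern>/jump/*</url-pattern>
</servlet-mapping>
```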

There also are several properties you can configure for the Nucleus component:
- templates
An array of IndirectUrlTemplate components that the servlet examines in the order specified until it finds one that matches the static request URL.
- defaultRepository
Specifies the repository to associate with repository items for which a repository is not
otherwise specified.
- defaultWebApp
Specifies the default web application to use when determining the context path for a URL.

In addition, the servlet has nextServlet and insertAfterServlet properties for including the
component in a servlet pipeline. If the servlet is configured through the web.xml file, you should not set these properties.

The templates point to global component property files for the component: IndirectUrlTemplate or DirectUrlTemplate (atg.repository.seo.IndirectUrlTemplate and atg.repository.seo.DirectUrlTemplate) where you can specify many properties, including:

regexElementList=parentAlias | string,\
skuItem | id | /atg/commerce/catalog/ProductCatalog:skuLookup,sTab | string

Then the forwarded URL by the jump servlet like:


Another example: suppose you have a URL format that looks like this:


The regular expression pattern for this format might be specified like this:

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09

ATG Made Easy - part 2

4) Form Handlers:
*Built-in form handlers and custom form handlers extend atg.droplet.GenericFormHandler.
*Their function is to handle the HTML input, validate it, apply business logic, handle errors, and redirect to other pages.
-RepositoryFormHandler: CRUD operations on repositories.
It has dynamic bean property mapping; you access the values using notation like <dsp:valueof bean=".....formHandler.value.xxxx"/>
It can be used in input/textarea tags as well, as value="...", paramvalue="..." or beanvalue="..."
You can use converters as well.

* Submit Buttons:
<dsp:input bean="........update/create/delete/cancel" type="submit" value="caption"/>
The corresponding handler method name must follow the handleXxx convention (for example, handleUpdate).

* Page flow: using the (create/update/delete) successURL or errorURL, and cancelURL.
You can put these as hidden fields in the form.
You can also use sendRedirect(String url, request) to redirect outside the ATG application, or sendLocalRedirect(String url, request) to redirect inside the ATG application.

* In the case of such a redirection, you need to return false from the handler method to prevent the normal page flow using successURL, etc.

* Errors: getFormError() returns true if any error happened.
formExceptions --> Vector of all errors.
propertyExceptions --> dictionary mapping each repository item property to its exception.

*To add errors: addFormException(new DropletException("Invalid Item to add"));

Scopes: Request (a new instance for each request) or Session (one instance per session).
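To make the pieces above concrete, here is a minimal custom form handler sketch against the ATG 9.1 APIs (the class, property, and handler names are illustrative, not from the original post, and the code is not compilable without the ATG libraries):

```java
// Hypothetical custom form handler following the ATG conventions above.
import java.io.IOException;
import javax.servlet.ServletException;
import atg.droplet.DropletException;
import atg.droplet.GenericFormHandler;
import atg.servlet.DynamoHttpServletRequest;
import atg.servlet.DynamoHttpServletResponse;

public class SubscribeFormHandler extends GenericFormHandler {
  private String email; // bound from <dsp:input bean="...SubscribeFormHandler.email"/>

  public String getEmail() { return email; }
  public void setEmail(String pEmail) { email = pEmail; }

  // Invoked by <dsp:input bean="...SubscribeFormHandler.subscribe" type="submit"/>
  public boolean handleSubscribe(DynamoHttpServletRequest pRequest,
                                 DynamoHttpServletResponse pResponse)
      throws ServletException, IOException {
    if (email == null || !email.contains("@")) {
      addFormException(new DropletException("Invalid email address"));
      return true; // continue normal flow; errorURL will be used
    }
    // ... apply business logic here ...
    return true; // returning false would suppress the successURL redirect
  }
}
```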

5) Profile: "atg.userprofiling.Profile"
-Session scoped, configured in userProfile.xml in /atg/userprofiling
You may extend and customize it using the XML attribute xml-combine="remove/append/replace"
This is for external users; for internal users you may use InternalProfileRepository.

*ProfileFormHandler: a Nucleus component that facilitates login, logout, register, update, and changePassword for users.
It has the same features as other form handlers: successURL/errorURL for all these actions, plus a handleXxx method for each action that you can use to customize the application.

6) Servlet Pipelines :
A chain of processors configured by XML.
Treating the request-handling process as a pipeline of independent elements allows request handling to be treated in a component-oriented manner.

The PipelineManager invokes each processor's runProcess() method; the return code determines the next pipeline link (each processor implements the pipeline processor interface).
It uses JTA: if one processor chooses to roll back, the whole transaction is rolled back.

Dynamo uses two request-handling pipelines:
• DAS servlet pipeline for JHTML requests
• DAF servlet pipeline for JSP requests

This difference exists because JHTML is a proprietary language: it relies on the page compiler provided in the DAS servlet pipeline to compile each JHTML page into a servlet that is rendered as HTML by the application server.

The last servlet in the pipeline is TailPipelineServlet. It is responsible for calling FilterChain.doFilter(), which invokes the next filter defined in web.xml. The web application, unless it uses ATG Portal, does not include other servlet filters by default.

You can construct pipelines used by your own applications, or you can customize the existing pipelines in Dynamo by adding servlets. Components in a servlet pipeline should be globally scoped.

The heart of the servlet pipeline is the PipelineableServlet interface. All servlets in a pipeline must implement this interface. Servlets that implement PipelineableServlet have a nextServlet property that points to the next component to invoke. This is the primary pipelining mechanism used by the standard request-handling pipelines. Dynamo also provides an implementation class PipelineableServletImpl that implements this interface; your own classes can implement PipelineableServlet by subclassing PipelineableServletImpl.

The PipelineableServlet interface has two sub-interfaces that provide additional mechanisms for determining the next component to invoke: InsertableServlet and DispatcherPipelineableServlet. Servlets that implement the InsertableServlet interface have an insertAfterServlet property that enables the servlet to insert itself in a specific spot in the pipeline. The key advantage of this mechanism is that it does not require modifying any existing servlets in the pipeline.

To add an InsertableServlet to the servlet pipeline:
1. Write your servlet, extending InsertableServletImpl.
2. Define it as a component in the Nucleus hierarchy. It does not really matter where you put your servlet in the component hierarchy (though the location affects references to other components if you use relative pathnames).
3. Set the insertAfterServlet property of your servlet to point to the path of the pipeline servlet you want your servlet to follow. For example, if you want your servlet to follow the DynamoServlet in the pipeline, use:
4. Add the path to your servlet to the initialServices property of /atg/dynamo/servlet/Initial:
Note: Subclasses of InsertableServletImpl that need to add their own logic to doStartService should be sure to call super.doStartService() at some point.
When your servlet finishes processing, it passes the request and response objects to the next Servlet in the pipeline. InsertableServletImpl defines a method called passRequest, which automatically passes the request and response objects to the next servlet in the pipeline.

If the servlet does not call the passRequest() method, nothing is returned to the browser.
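The steps above can be sketched as follows, assuming the ATG 9.1 servlet pipeline APIs (the class name is illustrative, and the code is not compilable without the ATG libraries):

```java
// Hypothetical pipeline servlet that logs each request URI, then passes it on.
import java.io.IOException;
import javax.servlet.ServletException;
import atg.servlet.DynamoHttpServletRequest;
import atg.servlet.DynamoHttpServletResponse;
import atg.servlet.pipeline.InsertableServletImpl;

public class URITracingServlet extends InsertableServletImpl {
  public void service(DynamoHttpServletRequest pRequest,
                      DynamoHttpServletResponse pResponse)
      throws IOException, ServletException {
    if (isLoggingDebug())
      logDebug("Handling URI: " + pRequest.getRequestURI());
    passRequest(pRequest, pResponse); // hand off to the next pipeline servlet
  }
}
```

Its properties file would set $class to this class and insertAfterServlet to the Nucleus path of the pipeline servlet it should follow, and the component path would be added to the initialServices property as described in step 4.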

Commonly used pipelines: updateOrder/loadOrder/refreshOrder/processOrder in OrderManager

Configured in /atg/commerce/commercePipeline.xml
where you can add your processors to save/load/update special data during such operations..

<pipelinemanager>
  <pipelinechain name="processOrder">
    <pipelinelink name="...." transaction="TX..." xml-combine="replace">
      <processor jndi="/atg/....."/>
      <transition returnvalue="1" link="...."/>
    </pipelinelink>
  </pipelinechain>
</pipelinemanager>

(A diagram in the ATG documentation shows the typical path a request takes through the DAF servlet pipeline when ATG Content Administration is running.)

To run a pipeline from the code:
HashMap map = new HashMap(13);
map.put("OrderManager", this);
map.put("CatalogTools", getOrderTools().getCatalogTools());
map.put("OrderId", pOrderId);
map.put("OrderRepository", getOrderTools().getOrderRepository());
map.put("LoadOrderPriceInfo", Boolean.FALSE);
map.put("LoadTaxPriceInfo", Boolean.FALSE);
map.put("LoadItemPriceInfo", Boolean.FALSE);
map.put("LoadShippingPriceInfo", Boolean.FALSE);

PipelineResult result;
try {
    result = getPipelineManager().runProcess("loadOrder", map);
} catch (RunProcessException e) {
    throw new CommerceException(e);
}

*Pipeline registry is loaded during the ATG server start.

If you need to resolve a component from the request:
request.resolveName("/osa/ora/... full component name");

7) Initial Services:
If you want to load a service during server startup, you have several options:
-Add a servlet/JSP to web.xml and specify load-on-startup --> 1 or a small number, as in any JEE application.
-Add a class that extends GenericService to the initialServices property of an Initial component (e.g. /atg/dynamo/servlet/Initial).

This property lets you load services while the server is starting up.
One advantage is that you can also register a scheduler: a GenericService subclass that implements Schedulable (or do the same using code).

The most important method in GenericService is:
public void doStartService() throws ServiceException
where you put your startup code.
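A minimal sketch of such a service, assuming the ATG 9.1 APIs (the class name and log messages are illustrative, and the code is not compilable without the ATG libraries):

```java
// Hypothetical initial service; Nucleus calls doStartService() once at startup.
import atg.nucleus.GenericService;
import atg.nucleus.ServiceException;

public class CacheWarmupService extends GenericService {
  public void doStartService() throws ServiceException {
    if (isLoggingInfo())
      logInfo("Warming caches at server startup...");
    // ... startup work goes here ...
  }

  public void doStopService() throws ServiceException {
    if (isLoggingInfo())
      logInfo("Shutting down cache warmup service.");
  }
}
```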

Note that in different configuration layers you can add to or remove from the initial services.


8) Scheduler Tasks:

When the Scheduler executes a job, it calls performScheduledTask on the object that performs the task, which must implement atg.service.scheduler.Schedulable. Typically, the component that schedules the task is also the Schedulable component that executes it, but this is not strictly required.
When a component schedules a task, it must provide the Scheduler with the following information:
• A name for the scheduled job; used only for display to the administrator.
• The name of the component scheduling the job; used only for display to the administrator.
• The Schedulable object that handles the job; typically, the same as the component that schedules the job.
• A flag that indicates how to run the job:
**In a separate thread.
**In the same thread as other scheduled services.
**In a dedicated, reusable thread.

All of this information is encapsulated in a ScheduledJob object, which is passed to the Scheduler’s addScheduledJob() method.
When a job is added to the Scheduler, the Scheduler returns an integer job ID, which you can later use to reference that job. For example, to stop a scheduled job, you can call removeScheduledJob on the Scheduler, passing in the ID of the job to stop.

- There are 3 types of schedules. They can also be created programmatically by creating instances of RelativeSchedule, PeriodicSchedule, or CalendarSchedule.

**PeriodicSchedule specifies a task that occurs at regular intervals, in this format:
schedule=every integer time-unit[ with catch up]
e.g. schedule=every 30 minutes
**RelativeSchedule specifies a time relative to the current time, in this format:
schedule=in integer time-unit
e.g. schedule=in 30 seconds
**CalendarSchedule schedules a task to run at designated points on the calendar. For example, you might schedule a task for 2:30am each Sunday, or a specific date such as January 1. The format for a CalendarSchedule looks like this:
schedule=calendar mos dates wkdays mo-occurs hrs mins
e.g. schedule=calendar * 1,15 . * 14 5
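Putting this together, a Schedulable component can be configured declaratively. A hedged sketch of a properties file for a SchedulableService subclass (the component class and path are made up; scheduler and schedule are the standard SchedulableService properties):

```
$class=osa.ora.CatalogSyncService
$scope=global
# the Scheduler component that will run this job
scheduler=/atg/dynamo/service/Scheduler
# a PeriodicSchedule, in the format described above
schedule=every 30 minutes
```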

Information about all tasks being run by the Scheduler is available through the Component Browser. If you go to the page for the /nucleus/atg/dynamo/service/Scheduler component, you see a list of all tasks monitored by the Scheduler.

The SingletonSchedulableService, a subclass of the standard SchedulableService, works in conjunction with a client lock manager to guarantee that only one instance of a scheduled service is running at any given time throughout a cluster of cooperating Dynamo servers. This provides the foundation for an architecture where scheduled services can be configured on multiple Dynamo servers to provide fault tolerance, but the system can still ensure that each task is performed at most once.

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09

Sunday, September 11, 2011

ATG Made Easy - part 1

Here is a quick introduction to ATG now Oracle Commerce.

1) Nucleus:
A Nucleus component is a Java bean that has a property file containing the default values for certain attributes; you can also inject other Nucleus components using the property file.
It contains 2 important attributes:
-$class=osa.ora.ClassName which represents the class of this component.
-$scope=global (default), session or request.
The power of Nucleus is that you can configure a component with different behavior according to:
-Included layers
Changing the behavior is done either by supplying different property values or even by changing the component class.
Configurations are placed in the config path declared by ATG-Config-Path in the manifest file (in the META-INF folder).
This is called: configuration layering.
-Another advantage is that you can change component values from dyna-admin (the ATG admin interface) at run time.

You can use dyna-admin to identify all the layers of a component, using the definitionFiles property or the "view service configuration" view, where you can find the overriding files in each layer. This is useful in troubleshooting when you change property/XML files and the change is not reflected in your application (the last file listed is the final override).
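As an illustration of such a property file, a hypothetical component /osa/ora/GreetingService could be defined by a GreetingService.properties file in the config path (all names and values here are made up):

```
$class=osa.ora.GreetingService
$scope=global
# inject another Nucleus component by its path
profileTools=/atg/userprofiling/ProfileTools
# plain default values for the bean's properties
greeting=Hello
retryCount=3
```

A higher configuration layer can then override any of these lines (or even $class) without touching this file.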

You can still resolve global components from the code by using:
(cast to component) Nucleus.getSystemNucleus().resolveName("/osa/ora/.... component full name");
And if you want to resolve it from the request you can use:
ServletUtil.getCurrentRequest().resolveName("/osa/ora/.... component full name");

* Logging:
Defined in the ATG config path.
You can override the values in your module/component/path according to the layering.
Some components define debugLevel from 1-15, where 1=minimal and 15=maximum.

2) Droplet:ATG Servlet Bean:
A view component called from DSP (Dynamo Server Pages):
<%@ taglib uri="/dspTaglib" prefix="dsp"%>
then you can use the tags:
<dsp:page> logic using dsp tags ... </dsp:page>

A droplet encapsulates view logic that would otherwise be repeated, in a single processing location.
A droplet extends the DynamoServlet class.

The dsp:droplet tag lets you invoke a servlet bean from a JSP page. It encapsulates programming logic in a server-side JavaBean and makes that logic accessible to the JSP page that calls it. Tag output is held by its parameters, which can be referenced by nested tags.

The following example shows how to use this tag:
<dsp:droplet name="/atg/dynamo/droplet/ForEach">
<dsp:param name="array" bean="/samples/Student_01.subjects"/>
<dsp:oparam name="outputStart">
<p>The student is registered for these courses:</p>
</dsp:oparam>
<dsp:oparam name="output">
<li><dsp:valueof param="element"/></li>
</dsp:oparam>
<dsp:oparam name="outputEnd">
</dsp:oparam>
</dsp:droplet>

We have 3 types of parameters here:
1) Input : The previous example supplies the input parameter array to identify the type of data to process
2) Output : In the previous example, element is an output parameter that contains the value of the current array element.
3) Open : marked by dsp:oparam tags, specifying code to execute at different stages of servlet processing; ForEach has 3:
-outputStart : executed just before loop processing begins
-output : executed during each loop iteration
-outputEnd : executed just after processing completes.

We can also use EL to access the output parameters, for example:

<dsp:droplet name="/atg/dynamo/droplet/ForEach" var="fe">
<dsp:oparam name="output">
<li><c:out value="${fe.element}"/></li>
</dsp:oparam>
</dsp:droplet>

ATG Servlet Beans and Servlets
Your servlet must be a subclass of DynamoServlet. Its service method takes DynamoHttpServletRequest and DynamoHttpServletResponse objects as parameters.

These interfaces are subclasses of standard servlet interfaces. DynamoHttpServletRequest extends HttpServletRequest and adds several functions that are used to access ATG servlet bean functionality.
The DynamoHttpServletResponse extends HttpServletResponse and also adds a couple of useful functions.
The DynamoServlet class implements the javax.servlet.Servlet interface. It passes requests to the service method by passing a DynamoHttpServletRequest and DynamoHttpServletResponse as parameters.
The DynamoServlet class extends atg.nucleus.GenericService, which allows an ATG servlet bean to act as a Nucleus component. This means that the ATG servlet bean has access to logging interfaces, can be viewed in the Component Browser, and has all the other advantages of a Nucleus service.
A servlet invoked with the DSP tag library tag need not be a subclass of DynamoServlet; it only needs to implement the javax.servlet.Servlet interface.
Any servlets that you write for other application servers can be inserted in JSPs with DSP tag library tags. However, those servlets lack access to the facilities available to Nucleus components. If you write ATG servlet beans from scratch, the DynamoServlet class provides an easier starting point.
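A minimal droplet sketch against the ATG 9.1 APIs (the class name and the "time" parameter are illustrative, and the code is not compilable without the ATG libraries):

```java
// Hypothetical ATG servlet bean that sets an output parameter
// and then renders its "output" oparam.
import java.io.IOException;
import javax.servlet.ServletException;
import atg.servlet.DynamoHttpServletRequest;
import atg.servlet.DynamoHttpServletResponse;
import atg.servlet.DynamoServlet;

public class CurrentTimeDroplet extends DynamoServlet {
  public void service(DynamoHttpServletRequest pRequest,
                      DynamoHttpServletResponse pResponse)
      throws ServletException, IOException {
    // expose an output parameter to the page
    pRequest.setParameter("time", new java.util.Date().toString());
    // render the nested <dsp:oparam name="output"> block
    pRequest.serviceParameter("output", pRequest, pResponse);
  }
}
```

In the page it would be invoked with <dsp:droplet name="/osa/ora/CurrentTimeDroplet"> and a nested output oparam reading the time parameter.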

3) ATG Repositories:
-Profile, Content, and Commerce repositories are examples of these repositories.
*Repository Item : (atg.repository.RepositoryItem) : must have an id +/- any other properties.
Some droplets use these repositories:
-RQQueryForEach & RQQueryRange
-ItemLookupDroplet & RepositoryLookup..

-How to define repository XML:
<item-descriptor name="...." ....>
..... description here.....

-Element attributes can be:
1) Transient: not persisted in DB.
2) Persistent: stored in DB; you need to provide the DB table/column.
The table relation type can be "primary", "auxiliary" or "multi".
3) Derived: calculated with each request; you need to provide a derivation class that extends DerivationMethodImpl.

Properties could be:
-Simple : string, integer..
-Enumerated: option value=x code=x where the code is what is stored in the DB.
-Items: refer to other items.

Also it could be:
-Single Values
-Multi-values: Array, Set and Map of other objects.

It is supported in XML...
Base ---> item-descriptor ....sub-type=class
Child --> item-descriptor ....super-type=class

- You can also use layering to override , replace , remove elements according to your needs...
just specify the xml-combine attribute with either:
-"remove", "append" or "replace".

Some repositories are already defined, and you can extend/modify a repository by having a layer (or layers) that defines that repository definition file, such as:
userprofile.xml (for User Repository)
customCatalog.xml (for Commerce catalog repository)
priceLists.xml (for commerce price lists)

NB: You can cascade deletes in relations using cascade="delete"
NB: You can group some properties for better performance: property=first group=full_name and property=last group=full_name
NB: You can use the ID generator to generate IDs for a repository:
<item-descriptor name=xxx id-space-name="unique name for id generation"
This is either auto-generated or explicitly configured in /atg/dynamo/service/idspaces.xml
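A hedged sketch of such an idspaces.xml entry (the name and attribute values are illustrative):

```xml
<id-spaces>
  <!-- hands out ids in batches of 1000, formatted like art1, art2, ... -->
  <id-space name="article" seed="1" batch-size="1000" prefix="art"/>
</id-spaces>
```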

**Object Types:
1) Repository: used to get a RepositoryItem (by its id) and to obtain a RepositoryView.
--> Methods: getItem() and getView()
2) RepositoryItem: represents an individual item of the repository.
--> Methods: getPropertyValue()
3) RepositoryView: used to execute queries that return an array of RepositoryItems.
--> Methods: executeQuery()
4) MutableRepository: methods to insert/update/delete repository items.
--> Methods: addItem(), createItem(), updateItem(), removeItem() and getItemForUpdate()
5) MutableRepositoryItem: used to change/set values of a repository item.

Example: To update
MutableRepositoryItem mutableItem = mutableRepository.getItemForUpdate(id, "Article");
mutableItem.setPropertyValue(...., ...);
mutableRepository.updateItem(mutableItem);

Example: To create
MutableRepositoryItem mutableItem = mutableRepository.createItem("Article");
mutableItem.setPropertyValue(...., ...);
mutableRepository.addItem(mutableItem);

**Using RQL queries: either as a statement (String) or a named query (in the repository XML)
RepositoryView myView = repos.getView("Article");
Object[] rqlParams = new Object[1];
rqlParams[0] = "Osama";
RqlStatement statement = RqlStatement.parseRqlStatement("name = ?0");
RepositoryItem[] articleList = statement.executeQuery(myView, rqlParams);
..... loop over the results......

<item-descriptor name="Article">
<rql> name = ?0 </rql>
</item-descriptor>

And in the code:
NamedQueryView view = (NamedQueryView) myView;
Query namedQuery = view.getNamedQuery("MyQuery");
ParameterSupportView pView = (ParameterSupportView) myView;
RepositoryItem[] articleList = pView.executeQuery(namedQuery, rqlParams);

In repository definition files, you can configure the cache mode, cache size, and other cache-related properties; you can check the cache statistics by accessing the dyna-admin page of these components.
The important thing to be aware of is the locking cache mode, where one lock server exists in the cluster and the other servers have client lock managers that route lock requests to it, with a configurable timeout for obtaining such locks. You shouldn't use this lock mode unless you need it, as in the OrderRepository for example.
• /atg/dynamo/service/ServerLockManager
• /atg/dynamo/service/ClientLockManager

Example of how you can obtain a lock over an order:
TransactionDemarcation td = new TransactionDemarcation();
try {
    td.begin(getTransactionManager(), td.REQUIRED);
    // the LockReleaser releases the write lock when the transaction ends
    LockReleaser lr = new LockReleaser(getClientLockManager(),
        getTransactionManager().getTransaction());
    lr.addWriteLock(pOrder.getId());
    // <insert your code here>
} catch (Exception de) {
    return false;
} finally {
    try {
        td.end();
    } catch (TransactionDemarcationException tde) {
        // log the exception
    }
}
Reference: ATG Platform documentation set : Version 9.1 - 7/31/09