Monday, September 12, 2011

ATG Made Easy - part 2

4) Form Handlers:
* Built-in form handlers and custom form handlers extend atg.droplet.GenericFormHandler.
* Their job is to handle HTML form input: validation, applying business logic, handling errors, and redirecting to other pages.
- RepositoryFormHandler: CRUD operations on repositories.
It has a dynamic bean "value" property that maps the repository item's properties; you access them with notation such as <dsp:valueOf bean=".....FormHandler.value.xxxx"/>.
It can be used in input/textarea tags as well, via value="...", paramValue="...", or beanValue="....".
You can use converters as well.
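As an illustrative JSP fragment (the component path /osa/ora/ItemFormHandler and the property name displayName are hypothetical, not real ATG names):

```jsp
<%-- Hypothetical form bound to a RepositoryFormHandler instance.
     repositoryId, value.*, and the update submit property are the
     standard RepositoryFormHandler bindings. --%>
<dsp:form action="itemEdit.jsp" method="post">
  <dsp:input bean="/osa/ora/ItemFormHandler.repositoryId" type="hidden" paramvalue="itemId"/>
  <dsp:input bean="/osa/ora/ItemFormHandler.value.displayName" type="text"/>
  <dsp:input bean="/osa/ora/ItemFormHandler.update" type="submit" value="Save"/>
</dsp:form>
```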

* Submit Buttons:
<dsp:input bean="........update/create/delete/cancel" type="submit" value="caption"/>
The corresponding handler method name must be handleXxx (e.g. handleUpdate for the update submit property).

* Page flow: controlled by the (create/update/delete) successURL, errorURL, and cancelURL properties.
You can set them as hidden fields in the form.
You can also call sendRedirect(String url, request) // to redirect outside the ATG app
or sendLocalRedirect(String url, request) // to redirect inside the ATG app.

* In case of programmatic redirection, you need to return false from the handler method to prevent the normal page flow via successURL, etc.

* Errors: getFormError() returns true if an error occurred.
formExceptions --> Vector of all errors.
propertyExceptions --> dictionary mapping each repository item property to its exception.

*To add errors : addFormException(new DropletException("Invalid Item to add"));
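Putting the pieces together, here is a minimal sketch of a custom handler method. All names are hypothetical, and small stubs stand in for the real atg.droplet classes so the snippet is self-contained; a real handler would extend atg.droplet.GenericFormHandler directly.

```java
import java.util.Vector;

// --- Minimal stubs standing in for atg.droplet classes (illustration only) ---
class DropletException extends Exception {
    public DropletException(String pMessage) { super(pMessage); }
}

class GenericFormHandler {
    protected Vector<DropletException> mFormExceptions = new Vector<>();
    public void addFormException(DropletException pException) { mFormExceptions.add(pException); }
    public boolean getFormError() { return !mFormExceptions.isEmpty(); }
    public Vector<DropletException> getFormExceptions() { return mFormExceptions; }
}

// Hypothetical handler: handleCreate is invoked for <dsp:input bean="....create" type="submit"/>
public class ItemFormHandler extends GenericFormHandler {
    private String mItemName;
    public void setItemName(String pItemName) { mItemName = pItemName; }

    public boolean handleCreate(Object pRequest, Object pResponse) {
        if (mItemName == null || mItemName.trim().isEmpty()) {
            addFormException(new DropletException("Invalid item to add"));
            return false;          // veto the normal successURL/errorURL page flow
        }
        // ...apply business logic (e.g. create the repository item)...
        return true;               // allow the default redirect to successURL
    }
}
```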

Scopes: request (a new instance for each request) or session (one instance per session).

5) Profile: "atg.userprofiling.Profile"
- Session scoped, configured in userProfile.xml in /atg/userprofiling.
You can extend and customize it using the XML attribute xml-combine="remove/append/replace".
This is for external users; for internal users you can use the InternalProfileRepository.

* ProfileFormHandler: a Nucleus component that facilitates login, logout, register, update, and changePassword for users.
It has the same form-handler features: successURL/errorURL for each of these actions, plus a handleXxx method for each action that you can override to customize the application.
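For example, a typical login form. The component path below is the standard /atg/userprofiling/ProfileFormHandler; the page names are placeholders:

```jsp
<dsp:form action="login.jsp" method="post">
  <dsp:input bean="/atg/userprofiling/ProfileFormHandler.value.login" type="text"/>
  <dsp:input bean="/atg/userprofiling/ProfileFormHandler.value.password" type="password"/>
  <dsp:input bean="/atg/userprofiling/ProfileFormHandler.loginSuccessURL" type="hidden" value="home.jsp"/>
  <dsp:input bean="/atg/userprofiling/ProfileFormHandler.loginErrorURL" type="hidden" value="login.jsp"/>
  <dsp:input bean="/atg/userprofiling/ProfileFormHandler.login" type="submit" value="Log In"/>
</dsp:form>
```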

6) Servlet Pipelines:
A chain of processors configured in XML.
Treating the request-handling process as a pipeline of independent elements allows request-handling to be treated in a component-oriented manner.

The PipelineManager invokes each processor (a class implementing PipelineProcessor) via its runProcess() method; the return code decides the next pipeline element.
Pipelines use JTA: if one processor chooses to roll back, the whole transaction is rolled back.

Dynamo uses two request-handling pipelines:
• DAS servlet pipeline for JHTML requests
• DAF servlet pipeline for JSP requests

This difference exists because JHTML is a proprietary language: it relies on the page compiler provided in the DAS servlet pipeline to compile a JHTML page into a servlet, which is then rendered as HTML by the application server.

The last servlet in the pipeline is TailPipelineServlet. It is responsible for calling FilterChain.doFilter(), which invokes the next filter defined in web.xml. The web application, unless it uses ATG Portal, does not include other servlet filters by default.

You can construct pipelines used by your own applications, or you can customize the existing pipelines in Dynamo by adding servlets. Components in a servlet pipeline should be globally scoped.

The heart of the servlet pipeline is the PipelineableServlet interface. All servlets in a pipeline must implement this interface. Servlets that implement PipelineableServlet have a nextServlet property that points to the next component to invoke. This is the primary pipelining mechanism used by the standard request-handling pipelines. Dynamo also provides an implementation class PipelineableServletImpl that implements this interface; your own classes can implement PipelineableServlet by subclassing PipelineableServletImpl.

The PipelineableServlet interface has two sub-interfaces that provide additional mechanisms for determining the next component to invoke: InsertableServlet and DispatcherPipelineableServlet. Servlets that implement the InsertableServlet interface have an insertAfterServlet property that enables the servlet to insert itself in a specific spot in the pipeline. The key advantage of this mechanism is that it does not require modifying any existing servlets in the pipeline.

To add an InsertableServlet to the servlet pipeline:
1. Write your servlet, extending InsertableServletImpl.
2. Define it as a component in the Nucleus hierarchy. It does not really matter where you put your servlet in the component hierarchy (though the location affects references to other components if you use relative pathnames).
3. Set the insertAfterServlet property of your servlet to point to the path of the pipeline servlet you want your servlet to follow. For example, if you want your servlet to follow the DynamoServlet in the pipeline, use:
insertAfterServlet=/atg/dynamo/servlet/pipeline/DynamoServlet
4. Add the path to your servlet to the initialServices property of /atg/dynamo/servlet/Initial:
initialServices+=/myServlet
Note: Subclasses of InsertableServletImpl that need to add their own logic to doStartService should be sure to call super.doStartService() at some point.
When your servlet finishes processing, it passes the request and response objects to the next Servlet in the pipeline. InsertableServletImpl defines a method called passRequest, which automatically passes the request and response objects to the next servlet in the pipeline.
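The pass-along behavior can be sketched as follows. Tiny stubs replace InsertableServletImpl and the request/response types so the idea runs outside ATG; a real pipeline servlet would extend atg.servlet.pipeline.InsertableServletImpl and override service(request, response).

```java
// --- Stub mimicking the nextServlet / passRequest mechanism (illustration only) ---
class PipelineServletStub {
    private PipelineServletStub mNextServlet;            // the nextServlet property
    public void setNextServlet(PipelineServletStub pNext) { mNextServlet = pNext; }

    // Mirrors InsertableServletImpl.passRequest(pRequest, pResponse)
    protected void passRequest(StringBuilder pResponse) {
        if (mNextServlet != null) mNextServlet.service(pResponse);
    }

    public void service(StringBuilder pResponse) { passRequest(pResponse); }
}

// Hypothetical servlet that stamps the response, then hands off down the pipeline.
public class AuditServlet extends PipelineServletStub {
    @Override
    public void service(StringBuilder pResponse) {
        pResponse.append("[audited]");   // this servlet's own processing
        passRequest(pResponse);          // forward, or nothing reaches the browser
    }
}
```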

If the servlet does not call the passRequest() method, nothing is returned to the browser.

Commonly used processor pipelines: updateOrder/loadOrder/refreshOrder/processOrder, run by the OrderManager.

They are configured in /atg/commerce/commercepipeline.xml,
where you can add your own processors to save/load/update extra data during such operations.

<pipelinemanager>
  <pipelinechain name="processOrder">
    ....
    <pipelinelink name="...." transaction="TX...." xml-combine="replace">
      <processor jndi="/atg/....."/>
      <transition returnvalue="1" link="...."/>
    </pipelinelink>
    .....
    ....
  </pipelinechain>
</pipelinemanager>


The ATG documentation includes a diagram showing the typical path a request takes through the DAF servlet pipeline when ATG Content Administration is running.

To run a pipeline from the code:

HashMap map = new HashMap(13);
map.put("OrderManager", this);
map.put("CatalogTools", getOrderTools().getCatalogTools());
map.put("OrderId", pOrderId);
map.put("OrderRepository", getOrderTools().getOrderRepository());
map.put("LoadOrderPriceInfo", Boolean.FALSE);
map.put("LoadTaxPriceInfo", Boolean.FALSE);
map.put("LoadItemPriceInfo", Boolean.FALSE);
map.put("LoadShippingPriceInfo", Boolean.FALSE);

PipelineResult result;
try {
    result = getPipelineManager().runProcess("loadOrder", map);
}
catch (RunProcessException e) {
    throw new CommerceException(e);
}

* The pipeline registry is loaded during ATG server startup.

If you need to resolve a component from the request:
request.resolveName("/osa/ora/... full component name");



7) Initial Services:
If you want to load a service during server startup, you have several options:
- Add a servlet/JSP to web.xml and specify load-on-startup with 1 or another small number, as in any JEE application.
- Add a class that extends GenericService to the initialServices property in Initial.properties.

This properties file lets Nucleus load these components while the server is starting up.
One advantage is that such a service can also implement Schedulable and register itself as a scheduled job (or you can do the same in code).

The most important method in GenericService is:
public void doStartService() throws ServiceException
where you put your startup code.
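A minimal sketch of such a startup service. The stubs below mimic just enough of the GenericService lifecycle to run outside ATG; a real component would extend atg.nucleus.GenericService and be listed in initialServices.

```java
// --- Stubs standing in for atg.nucleus classes (illustration only) ---
class ServiceException extends Exception {
    public ServiceException(String pMessage) { super(pMessage); }
}

class GenericServiceStub {
    private boolean mRunning;
    public boolean isRunning() { return mRunning; }
    // Nucleus calls startService when the component is instantiated at startup
    public void startService() throws ServiceException {
        doStartService();
        mRunning = true;
    }
    public void doStartService() throws ServiceException { }
}

// Hypothetical initial service that warms a cache at startup.
public class CacheWarmupService extends GenericServiceStub {
    private int mEntriesLoaded;
    public int getEntriesLoaded() { return mEntriesLoaded; }

    @Override
    public void doStartService() throws ServiceException {
        // your startup code goes here
        mEntriesLoaded = 42;   // pretend we preloaded 42 cache entries
    }
}
```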

Note that in different configuration layers you can add to or remove from the initial services:

initialServices+=/myServlet
initialServices-=/myServlet

8) Scheduler Tasks:

When the Scheduler executes a job, it calls performScheduledTask on the object that performs the task, which must implement atg.service.scheduler.Schedulable. Typically, the component that schedules the task is also the Schedulable component that executes it, but this is not strictly required.
When a component schedules a task, it must provide the Scheduler with the following information:
• A name for the scheduled job; used only for display to the administrator.
• The name of the component scheduling the job; used only for display to the administrator.
• The Schedulable object that handles the job; typically, the same as the component that schedules the job.
• A flag that indicates how to run the job:
**In a separate thread.
**In the same thread as other scheduled services.
**In a dedicated, reusable thread.

All of this information is encapsulated in a ScheduledJob object, which is passed to the Scheduler’s addScheduledJob() method.
When a job is added to the Scheduler, the Scheduler returns an integer job ID, which you can later use to reference that job. For example, to stop a scheduled job, you can call removeScheduledJob on the Scheduler, passing in the ID of the job to stop.

- There are three types of schedules; they can also be created programmatically by instantiating RelativeSchedule, PeriodicSchedule, or CalendarSchedule.

**PeriodicSchedule specifies a task that occurs at regular intervals, in this format:
schedule=every integer time-unit[ with catch up]
e.g. schedule=every 30 minutes
**RelativeSchedule specifies a time relative to the current time, in this format:
schedule=in integer time-unit
e.g. schedule=in 30 seconds
**CalendarSchedule schedules a task to run at designated points on the calendar. For example, you might schedule a task for 2:30am each Sunday, or a specific date such as January 1. The format for a CalendarSchedule looks like this:
schedule=calendar mos dates wkdays mo-occurs hrs mins
e.g. calendar * 1,15 . * 14 5
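A typical Nucleus configuration for a scheduled component might look like this; the component class name is hypothetical, while scheduler and schedule are the standard SchedulableService properties:

```properties
# MyCleanupService.properties -- component path and class are illustrative
$class=com.example.service.MyCleanupService
$scope=global
scheduler=/atg/dynamo/service/Scheduler
schedule=every 30 minutes
```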

Information about all tasks being run by the Scheduler is available through the Component Browser. If you go to the page for the /nucleus/atg/dynamo/service/Scheduler component, you see a list of all tasks monitored by the Scheduler.

The SingletonSchedulableService, a subclass of the standard SchedulableService, works in conjunction with a client lock manager to guarantee that only one instance of a scheduled service is running at any given time throughout a cluster of cooperating Dynamo servers. This provides the foundation for an architecture where scheduled services can be configured on multiple Dynamo servers to provide fault tolerance, but the system can still ensure that each task is performed at most once.

Reference: ATG Platform documentation set : Version 9.1 - 7/31/09
