Monday, October 1, 2012

CMS or Web-Framework

Content Management Systems (CMS) are platforms you can install on your web server that let you choose or create a theme and begin adding content to your website. 

CMS solutions are great for blogs, news sites, and basic corporate or informational websites where the intent is to have pages with mostly text, links and images on them. 

For example, WordPress and Drupal are CMS platforms (WordPress started off as a blogging platform and has evolved into a CMS). 

Also, some CMS solutions are more advanced and can handle more complex websites, but these tend to be more specialized and/or cost money.

In addition to basic text, links, and images, most CMS solutions support plugins that let Web 2.0 items be embedded in the content area of a page or in the menu or sidebar. 

By Web 2.0 I mean more advanced features that create dynamic content, like Google Maps or interactive content. Some of these things can be easily embedded without plugins depending on how easy the content creator has made it to embed. 

WordPress, for instance, has thousands of plugins.

Some plugins are not CMS-specific. A good example is Disqus, which lets you add comments to your website by adding a small amount of code to your HTML.

A web framework is just a software framework built to work on website code. 

Frameworks can be in any language. Trying to mesh frameworks from different languages can be a challenge though. Usually, part of the framework code is built to work on the server side and is never seen by the client. 

Frameworks are small to large size code packages that can be used to build websites more quickly. They can add a vast array of functionality to your site. Some examples are CakePHP, anything installed with NuGet for .Net, or Rails.
Finally, another way to look at it is that most CMS solutions are web frameworks themselves. They are just on the larger end of the code base scale.

JBoss AS 7




This is an introductory tutorial to the newest JBoss AS 7, which appeared in the download section for the first time in Nov 2010. 

JBoss AS 7 does not come with an installer; it is simply a matter of extracting the compressed archive.
After you have installed JBoss, it is wise to perform a simple startup test to validate that there are no major problems with your Java VM/operating system combination. To test your installation, move to the bin directory of your JBOSS_HOME directory and issue the following command:


standalone.bat # Windows users
$ standalone.sh # Linux/Unix users      

The above command starts up a JBoss standalone instance, which is equivalent to starting the application server with the run.bat/run.sh script used by earlier AS releases. 

You will notice how amazingly fast the new release of the application server starts; this is due to the new modular architecture, which starts only the parts of the application server container needed by the loaded applications.

If you need to customize the startup properties of your application server, then you need to open the standalone.conf (or standalone.conf.bat for the Windows users) where the memory requirements of JBoss are declared. 

Here's the Linux core section of it:
 

if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000"
fi

So, by default, the application server starts with a minimum memory requirement of 64MB of heap space and a maximum of 512MB.

This is just enough to get started; however, if you need to run a core Java EE application on it, you will likely require at least 1GB of heap space, or 2GB or more, depending on your application type. Generally speaking, 32-bit machines cannot execute a process whose address space exceeds 2GB, while on 64-bit machines there is essentially no limit to process size.
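To raise those limits you can edit that section of standalone.conf; the values below are purely illustrative, not a recommendation for any particular application:

```shell
# In JBOSS_HOME/bin/standalone.conf -- illustrative heap settings only
JAVA_OPTS="-Xms1024m -Xmx2048m -XX:MaxPermSize=256m"
```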

You can verify that the server is reachable from the network by simply pointing your browser to the application server's welcome page, which is reachable by default at the well-known address: http://localhost:8080
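From a shell you can perform the same check with curl, assuming the default port 8080; a 200 status code means the welcome page is being served:

```shell
# Print the HTTP status code of the welcome page
# (200 when the server is up, 000 if it is unreachable)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080 || true
```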

Connecting to the server with the command line interface

If you have been using previous releases of the application server, you might have heard about the twiddle command-line utility that queried the MBeans installed on the application server. 
This utility has been replaced by a more sophisticated interface named the Command Line Interface (CLI), which can be found in the JBOSS_HOME/bin folder.

Just launch the jboss-cli.bat script (or jboss-cli.sh for Linux users) and you will be able to manage the application server via a shell interface. 


Now issue the connect [Ipaddress:port] command to connect to the management interface:

[disconnected /] connect
Connected to localhost:9999
 

Connecting to a remote host
Starting with the 7.1.0 release, security is enabled by default for remote clients in the AS 7.1 distribution. 

Thus, management interfaces are secured by default to prevent unauthorized remote access, whilst still allowing access for local users for an easier out-of-the-box experience.

If you are connecting to a remote host controller, then you need to provide your credentials:
 
./jboss-cli.sh --connect 192.168.1.1
Authenticating against security realm: 192.168.1.1
Username: admin
Password: *****
Connected to standalone controller at 192.168.1.1:9999
[standalone@192.168.1.1:9999 /]
 
To add new users to your management interfaces, you need to use the add-user.sh/add-user.bat script.

The users defined in the management realm are used for the authentication of remote CLI clients. 

By definition all HTTP access is considered remote, thus if you want to use the web administration console you will need to define users first.
This utility requires the following pieces of information for the new user:
  • Type - You can choose between application users (contained in application-users.properties) and management users (mgmt-users.properties), which is the default.
  • Realm - this is the name of the realm used to secure the management interfaces; by default it is 'ManagementRealm', so you can just press Enter. If you use a custom realm, this is where you need to enter its name.
  • Username - the username needs to be alphanumeric.
  • Password - at the moment the main restriction on this field is that it cannot be the same as the username.
Here's how to add a new Management user.

What type of user do you wish to add?
  a) Management User (mgmt-users.properties)
  b) Application User (application-users.properties)
 (a): a
 
 Enter the details of the new user to add.
 Realm (ManagementRealm) :
 Username : user1234
 Password :
 Re-enter Password :
 About to add user 'user1234' for realm 'ManagementRealm'
 Is this correct yes/no? yes
      

Stopping JBoss
Probably the easiest way to stop JBoss is by sending an interrupt signal with Ctrl+C.
However, if your JBoss process was launched in the background, or is running on another machine, then you can use the CLI to issue an immediate shutdown command:


[disconnected /] connect
Connected to localhost:9999
[localhost:9999 /] :shutdown

There is actually one more option to shut down the application server, which is pretty useful if you need to shut down the server from within a script. This option consists of passing the --connect option to the admin shell, thereby switching off the interactive mode:

jboss-cli.bat --connect command=:shutdown # Windows
jboss-cli.sh --connect command=:shutdown # Unix / Linux
 
Restarting JBoss
The command-line interface contains a lot of useful commands. One of the most interesting options is the ability to reload all or part of the AS configuration using the reload command. When issued on the root node path of the AS server, it reloads the services configuration:
[disconnected /] connect
Connected to localhost:9999
[localhost:9999 /] :reload
 

The new server structure
The first thing you'll notice when you browse through the application server folders is that its file system is basically divided into two core parts: the dichotomy reflects the distinction between standalone servers and domain servers.


A JBoss domain is used to manage and coordinate a set of application server instances. 

JBoss AS 7 in domain mode spawns multiple JVMs which build up the domain. 

Besides the AS instances, two more processes are created: the Domain Controller, which acts as the management control point of the domain, and the Host Controller, which interacts with the Domain Controller to control the lifecycle of the AS instances.




JBoss AS 7 running in domain mode

In order to launch JBoss AS 7 in domain mode, run the domain.sh/domain.bat script from the bin folder.


Deploying applications on JBoss AS 7
Applications are deployed differently depending on the type of server. 
If you are deploying to a domain of servers then you need the Command Line Interface, because the application server needs to be informed of which server group(s) the deployment targets.

Ex. Deploy an application on all server groups:
 

deploy MyApp.war --all-server-groups
 
Ex. Deploy an application on one or more server groups (separated by a comma):

deploy application.ear --server-groups=main-server-group

If you are deploying to a standalone server then you can either use the CLI or drop the deployment unit into the server deployments folder.

The deployments folder is the location in which users can place their deployment content (for example, WAR, EAR, JAR, SAR files) to have it automatically deployed into the server runtime. Users, particularly those running production systems, are encouraged to use the JBoss AS management APIs to upload and deploy content instead of relying on the deployment scanner subsystem that periodically scans this directory.

As soon as the hot-deployment scanner detects the application, the module is moved to the server's work folder, leaving a Test.war.deployed marker file in the deployments folder.

 
Note: with the default configuration, packaged archives (.ear, .war, .jar, .sar) are deployed automatically. For exploded archives, you need to add an empty .dodeploy marker file in the deployments folder to trigger deployment.
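As a sketch (the Test.war name and the JBOSS_HOME default are placeholders for illustration), triggering deployment of an exploded archive looks like this:

```shell
# JBOSS_HOME is assumed to point at the JBoss AS 7 installation;
# here we default it to a scratch directory purely for illustration.
JBOSS_HOME="${JBOSS_HOME:-$HOME/jboss-as-7}"

# Copy/create the exploded archive inside the deployments folder ...
mkdir -p "$JBOSS_HOME/standalone/deployments/Test.war"

# ... then create the empty marker file that triggers the scanner.
touch "$JBOSS_HOME/standalone/deployments/Test.war.dodeploy"
```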

Friday, September 7, 2012

Microsoft Management Console (MMC) saved query





The saved queries function in the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in lets you create, save, and organize queries that you'll use repeatedly for administering Active Directory (AD) objects. 
You can create queries using the wizard-like options on the New Query dialog box, or you can define custom searches that gather whatever objects you like simply by keying in your own LDAP queries.
Here are the steps to follow in the Active Directory Users and Computers console to create a Custom Search saved query:
  1. Right click the Saved Queries folder and select New, Query.
  2. Enter an appropriate Name and Description.
  3. Make sure the query root is set to the domain level you want the query to pertain to.
  4. Select the Include subcontainers check box if you want the query to search all subcontainers.
  5. Click Define Query.
  6. In the Find dialog box, click the Find drop-down arrow and select Custom Search.
  7. On the Advanced tab, enter your LDAP query string into the Enter LDAP query box.
  8. Click OK twice.
What follows is a list of queries that can help you administer AD—and get you started on the road to using saved queries to simplify AD management.
Groups Like Service (finds any group name that contains the word service)
(objectcategory=group)(samaccountname=*service*)
Description Like Service (finds accounts in which the description contains the word service)
(objectcategory=person)(description=*service*)
Groups Like Admin (finds any groups whose name contains the word admin)
(objectcategory=group)(samaccountname=*admin*)
Universal Groups (finds groups with universal scope)
(groupType:1.2.840.113556.1.4.803:=8)
Groups with No Members (finds groups that have no members in them)
(objectCategory=group)(!member=*)
Note: The ! symbol means "Not" and * means "Has a value," so the combination of the two evaluates to “Doesn’t have a value.”
Global, Domain Local, or Universal Groups (finds any group defined as a Global Group, a Domain Local Group, or a Universal Group)
(groupType:1.2.840.113556.1.4.804:=14)
Global, Domain Local, or Universal Groups with No Members (finds any group defined as a Global Group, a Domain Local Group, or a Universal Group that has no members)
(groupType:1.2.840.113556.1.4.804:=14)(!member=*)
User Like Service (finds any account ID that has a name containing the word service)
(objectcategory=person)(samaccountname=*service*)
Password Does Not Expire (finds user accounts with nonexpiring passwords)
(objectCategory=person)(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=65536)
No Employee ID (finds any user account that has no employeeid value)
(objectcategory=person)(!employeeid=*)
No Login Script (finds accounts that don't run a logon script)
(objectcategory=person)(!scriptPath=*)
No Profile Path (finds accounts that don’t have roaming profiles)
(objectcategory=person)(!profilepath=*)
Must Change Password and Not Disabled (finds nondisabled accounts that must change their password at next logon)
(objectCategory=person)(objectClass=user)(pwdLastSet=0)(!useraccountcontrol:1.2.840.113556.1.4.803:=2)
UserList Exclude Disabled Account (finds all user accounts except those that are disabled)
(objectCategory=person)(objectClass=user)(!useraccountcontrol:1.2.840.113556.1.4.803:=2)
Locked Out Accounts (finds all locked out accounts)
(objectCategory=person)(objectClass=user)(useraccountcontrol:1.2.840.113556.1.4.803:=16)
Domain Local Groups (finds groups with Domain Local scope)
(groupType:1.2.840.113556.1.4.803:=4)

Users with Email Address (finds accounts that have an email address)
(objectcategory=person)(mail=*)
Users with No Email Address (finds accounts with no email address)
(objectcategory=person)(!mail=*)

Connectors on JBoss

 


Implementation of the J2EE Connector Architecture (JCA)

JCA is a resource manager integration API whose goal is to standardize access to non-relational resources in the same way the JDBC API standardized access to relational data.

The purpose of these notes is to introduce the utility of the JCA APIs and then describe the architecture of JCA in JBoss.

 

JCA Overview

J2EE 1.4 contains a connector architecture (JCA) specification that allows for the integration of transacted and secure resource adaptors into a J2EE application server environment. 

The JCA specification describes the notion of such resource managers as Enterprise Information Systems (EIS). 

Examples of EIS systems include enterprise resource planning packages, mainframe transaction processing, non-Java legacy applications, etc. 

The reason for focusing on EIS is primarily because the notions of transactions, security, and scalability are requirements in enterprise software systems. However, the JCA is applicable to any resource that needs to integrate into JBoss in a secure, scalable and transacted manner. 

In this introduction we will focus on resource adapters as a generic notion rather than something specific to the EIS environment.
The connector architecture defines a standard SPI (Service Provider Interface) for integrating the transaction, security and connection management facilities of an application server with those of a resource manager. 

The SPI defines the system level contract between the resource adaptor and the application server.
The connector architecture also defines a Common Client Interface (CCI) for accessing resources. 

The CCI is targeted at EIS development tools and other sophisticated users of integrated resources. The CCI provides a way to minimize the EIS specific code required by such tools. 

Typically J2EE developers will access a resource using such a tool, or a resource specific interface rather than using CCI directly. The reason is that the CCI is not a type specific API. To be used effectively it must be used in conjunction with metadata that describes how to map from the generic CCI API to the resource manager specific data types used internally by the resource manager.

The purpose of the connector architecture is to enable a resource vendor to provide a standard adaptor for its product. A resource adaptor is a system-level software driver that is used by a Java application to connect to a resource. 

The resource adaptor plugs into an application server and provides connectivity between the resource manager, the application server, and the enterprise application. A resource vendor need only implement a JCA compliant adaptor once to allow use of the resource manager in any JCA capable application server. 

An application server vendor extends its architecture once to support the connector architecture and is then assured of seamless connectivity to multiple resource managers.

Likewise, a resource manager vendor provides one standard resource adaptor and it has the capability to plug in to any application server that supports the connector architecture. 


 Figure 5.1. The relationship between a J2EE application server and a JCA resource adaptor

 

The application server is extended to provide support for the JCA SPI to allow a resource adaptor to integrate with the server connection pooling, transaction management and security management facilities. This integration API defines a three-part system contract.

  • Connection management : a contract that allows the application server to pool resource connections. The purpose of the pool management is to allow for scalability. Resource connections are typically expensive objects to create, and pooling them allows for more effective reuse and management.
  • Transaction Management : a contract that allows the application server transaction manager to manage transactions that engage resource managers.
  • Security Management : a contract that enables secured access to resource managers.

 

 

 

 


First Aid of JVM Crash Issues

The Java Virtual Machine (JVM) is a native engine that allows our Java applications to run. 
It performs code optimization to improve performance. 
Incorrect tuning, low memory allocation, extensive code optimization, a bad garbage collection strategy, leaking API code, etc. are some of the reasons which may cause the JVM to crash.
Analyzing a JVM crash is an interesting but somewhat time-consuming process, and sometimes it is even a little complex to find the root cause of the crash. 
Here in this article we will see some of the common mistakes and first-aid solutions/debugging techniques, and find out what kind of information we can get by looking into the core dump.

What is Core Dump & Where to Find It?

A core dump is usually a binary file which gets generated by the operating system when the JVM or any other process crashes. 
Sometimes the JVM will not even be able to generate the crash dump. 
On Windows operating systems it is generated in the directory where the “Dr. Watson” tool is installed, usually:  
C:\Documents and Settings\All Users\Application Data\Microsoft\Dr Watson

By default on Unix-based operating systems the core dump files are created in the directory where the Java program/server was started, though sometimes they end up in the operating system's “/tmp” directory. 
Using the following Java options we can change the crash dump/heap dump generation location: the -XX:HeapDumpPath=/opt/app/someLocaton/ and -XX:+HeapDump JVM options.

NOTE: These flags do not guarantee that the heap/crash dump will always be generated at the time of a JVM crash. There are more reasons why the core dump may not get generated, such as process limitations, low disk quota, or unavailability of free file descriptors.
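On HotSpot JVMs the heap-dump behaviour is usually tied to OutOfMemoryError via the -XX:+HeapDumpOnOutOfMemoryError flag. A sketch of wiring both options into a server's JAVA_OPTS (the dump path here is a placeholder):

```shell
# Illustrative: append heap dump options to the JVM arguments
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/app/dumps"
```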



Thursday, August 23, 2012

Ubuntu Bootup Howto


Directories and Configs


  • /etc/init is where the upstart init configs live. While they are not scripts themselves, they essentially execute whatever is required to replace sysvinit scripts.
  • /etc/init.d is where all the traditional sysvinit scripts and the backward compatible scripts for upstart live. The backward compatible scripts basically run service myservice start instead of doing anything themselves. Some just show a notice to use the "service" command.
  • /etc/init/rc-sysinit.conf controls execution of traditional scripts added manually or with update-rc.d to traditional runlevels in /etc/rc*
  • /etc/default has configuration files allowing you to control the behaviour of both traditional sysvinit scripts and new upstart configs.

Using Services


Please note that, generally, you can use either the traditional sysvinit scripts and the methods of working with them, or the new upstart configs and the "service" command, interchangeably. It is however recommended you use the new upstart methods, which are both forward and backward compatible.
Starting a Service
# Traditional:
/etc/init.d/myservice start
# Upstart
service myservice start

Stopping a Service
# Traditional: 
/etc/init.d/myservice stop
# Upstart
service myservice stop
Getting a list of Services
# Traditional:
ls /etc/init.d
# Upstart: 
service --status-all
  • Note: Upstart method will show both traditional and upstart services.

Adding a Service to Default runlevels
# Traditional
update-rc.d apache2 defaults
  • Upstart: there is no concept of runlevels; everything is event driven with dependencies. You would add an upstart config to /etc/init and potentially source a config file in /etc/default to allow users to override default behaviour.
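A minimal upstart job file placed in /etc/init might look like the following sketch (the myservice name and daemon path are made up for illustration):

```
# /etc/init/myservice.conf -- hypothetical upstart job
description "My example service"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
exec /usr/bin/myservice-daemon
```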
Removing a Service from Default runlevels
# Traditional - Something along the lines of
rm /etc/rc*/*myscript
  • Upstart: If no config is available in /etc/default, edit config in /etc/init

    Show the List of Installed Packages on Ubuntu or Debian



    The command we need to use is dpkg --get-selections, which will give us a list of all the currently installed packages.
    $ dpkg --get-selections
    adduser                                         install
    alsa-base                                       install
    alsa-utils                                      install
    apache2                                         install
    apache2-mpm-prefork                             install
    apache2-utils                                   install
    apache2.2-common                                install
    apt                                             install
    apt-utils                                       install
    The full list can be long and unwieldy, so it’s much easier to filter through grep to get results for the exact package you need. For instance, I wanted to see which php packages I had already installed through apt-get:
    dpkg --get-selections | grep php
    libapache2-mod-php5                             install
    php-db                                          install
    php-pear                                        install
    php-sqlite3                                     install
    php5                                            install
    php5-cli                                        install
    php5-common                                     install
    php5-gd                                         install
    php5-memcache                                   install
    php5-mysql                                      install
    php5-sqlite                                     install
    php5-sqlite3                                    install
    php5-xsl                                        install
    For extra credit, you can find the locations of the files within a package from the list by using the dpkg -L command, such as:
    dpkg -L php5-gd
    /.
    /usr
    /usr/lib
    /usr/lib/php5
    /usr/lib/php5/20060613
    /usr/lib/php5/20060613/gd.so
    /usr/share
    /usr/share/doc
    /etc
    /etc/php5
    /etc/php5/conf.d
    /etc/php5/conf.d/gd.ini
    /usr/share/doc/php5-gd
    
    Now I can take a look at the gd.ini file and change some settings around…

Friday, August 3, 2012







Java EE Patterns - Rethinking Best Practices

I'd like to make you aware of the excellent book Real World Java EE Patterns - Rethinking Best Practices by Adam Bien (blog), a Java Champion and renowned consultant, software architect, and Java EE standardization committee member. I'd absolutely recommend it to any architect or developer serious about Java EE 5 or 6 development, even to those not planning to use EJB 3.x (at least prior to reading the book :)). It's a must-read complement to the now slightly outdated Core J2EE Patterns, as it updates the patterns for the new bright world of EJB 3/3.1 while discarding some of them and introducing some new, extremely useful patterns and techniques.

The book starts with an overview of the evolution of Java Enterprise Edition and the hard issues it solves for us, continues with the new and updated patterns and strategies and concludes with an introduction of two opposite architectures you can build with Java EE 5/6, namely lean SOA and domain-driven (which itself makes it worth reading).

What I really appreciate in addition to that valuable content is that for each pattern there is a section on testing and documentation and a really good evaluation of consequences in terms of maintainability, performance and other such qualities. You will find there also many code samples and beautiful applications of the Builder and Fluent API patterns.

The main message is that EJB 3.x is so lean and powerful that we must justify why NOT using it - and when using it, you should be very pragmatic and only introduce patterns, layers and principles if they bring real value.

A summary of the patterns

Because not only abstractions but also my memory is leaky :-), I've written down the key points as a reference and a reminder. It will be of a rather limited value to anybody else as it's closely bound to the structure and content of my mind, yet I hope it could give you a good idea of what is inside the book and why you should go and pick it up immediately :-).

A general rule: "An introduction of another layer of abstraction or encapsulation causes additional development and maintenance effort." [p137] Thus it must provide some real added value to be justifiable.
Notation: new or radically different patterns are marked with *

BUSINESS TIER

Service Facade (Application Service?!)
  • [JH: The old name should likely be Session Facade, not A. S.]
  • The boundary class used by UI
  • A transaction boundary too (SOA: starts a new tx)
  • Coarse-grained API
  • Usually @Stateless
  • Either contains a simple business logic (i.e. is merged w/ a Service) or delegates to Services/DAOs (incl. EntityManager)
Service (Session Facade?!)
  • [JH: The old name should likely be Application Service, not S.F.]
  • Fine-grained, reusable logic in an EJB with local access only, product of decomposition
  • Used by a Service Facade
  • Usually @Stateless with Tx=Mandatory
  • Uses EntityManager and/or specialized DAOs
  • In SOA arch. w/ anemic objects the behavior is here contrary to the PDO below
Persistent Domain Object (Business Object)*
  • Rich domain object, i.e. having rich behavior/business logic and persistent (vs. the anemic structures of SOA)
  • Forces:

    • Complex business logic
    • Type-specific behavior necessary (profits from inheritance, polymorphism) - e.g. validation rules are domain object related and sophisticated
    • DB may be adjusted for JPA needs

  • Solution: @Entity; getters/setters only when justified (vs. anemic structures, i.e. state is hidden); has methods modeling behavior and changing its state (creating other entities..); methods named fluently (domain-driven API)

    • Only the cross-cutting/orchestration logic impl. in a Service
    • Created/maintained by a Service, S.Facade or Gateway (CR(U)D methods)
    • Requires the entity to stay attached => buz.+present. tier in the same JVM
Gateway*
  • Provides access to the root PDOs, exposes them to the present. tier (opposed to S.Facade, which hides the logic impl.)
  • It's necessary to manage PDOs, which are unaware of EntityManager, and to hold client state (the PDOs) over consequent executions
  • The presentation tier must be in the same JVM
  • Essential for the (rich) domain-driven architecture
  • Forces:

    • PDOs are already well encapsulated, additional encaps./layering unnecess.
    • DB may be changed as needed
    • changes of UI are likely to affect DB anyway
    • An interactive app., w/ non-trivial validations depending on objects' state
    • Users decide when their changes are persisted, when not

  • Note: Not very suitable for RIA UIs, for they run on the client = remotely
  • Solution: @Stateful w/ Extended persist. context and tx=NONE and a single save() method that only starts a tx. (via annot. tx=Req.New) and thus forces Ent.Man. to commit changes performed so far
  • Consequences:

    • Scalability: Doesn't scale as well as stateless but still absolutely sufficient for most apps; depends on cache size, length of sessions; need to find out via load tests
    • Performance may be better as PDOs are loaded only once and accessed locally per ref.
    • Need a well defined strategy for handling stale objects, i.e. the OptimisticLockException, test early
    • Maint.: "In non-trivial, problem-solving applications, a Gateway can greatly improve the maintainability. It simplifies the architecture and makes the intermediary layers superfluous." [114] Unsuitable for SOA arch.w/ many externally exposed services.
    • Productivity, ease of use much higher
Fluid Logic
  • Inclusion of scripting for algorithms/bus.logic that change often so that recompilation/redeployment aren't necessary - JSR-223 (Scripting for the Java Platform)
  • Executed from a Service
Paginator and Fast Lane Reader
  • A Service/S.F. allowing for efficient access to large amounts of mostly read-only data [subset]
  • Former motivation: in EJB 2, a FLR accessed huge quantities of read-only data via JDBC directly for efficiency - JPA is efficient enough in this, providing a way to extract only some attributes and paginate over large results
  • Valid motivation: "JPA is not able to stream objects, rather than load the entire page into memory at once. For some use cases such as exports, batches, or reports direct access to the database cursor would provide better performance." [123]
  • Forces: iteration over large amount of data needed; it cannot be sent at once to the client and must be cached on the server; the client needs only some attributes of the entity; the access is mostly read-only
  • Solution - not a single one but different strategies w/ +/-

    • Paginator and Value List Handler Strategy: sess. bean implementing an Iterator of list of lists of domain objects (a list = a page); either Stateful holding internally the current position or Stateless; uses Query.setFirstResult + setMaxResults. May use Persist.Ctx.EXTENDED to be updatable.
    • Live Cursor and F.L.R. Str(eaming?): to get as fast access as possible when conversion into objects isn't necessary, we may let inject DataSource via a @Resource directly into an EJB and use JDBC; will share Connection w/ any Ent.Mgr. in the same tx

  • Paginator in a domain-driven arch.: "A Paginator is also an optimization, which should only be introduced in case the iteration over a collection of attached PDOs causes performance or resource problems." [p269]
Retired Patterns
  • Service Locator - replaced by DI
  • Composite Entity - JPA entities completely different
  • Value Object Assembler - VOs mostly not needed anymore as entities are POJOs; partly impl. by EntityManager
  • Business Delegate - not needed thanks to DI injecting a Business interface's impl., which doesn't throw EJB-specific, checked exceptions anymore
  • Domain Store - impl. by the EntityManager
  • Value List Handler - implemented by EntityManager; it can also execute native SQL and convert the results to entities or a list of primitives

Integration Tier

Data Access Object (DAO)
  • The J2EE 1.4 motivation for DAOs doesn't apply anymore:

    • JPA EntityManager is a DAO
    • Encapsulation of a DB is leaky anyway; you will only rarely change your DB vendor and never swap an RDBMS for another storage type

  • Thus DAOs are only needed for non-JPA resources and on those rare occasions where they provide real added value, such as consolidating often-repeated persistence logic to support DRY - or if your application has some special requirements that really justify the effort.
  • Solution: A stateless EJB with a @Local interface and @TransactionAttribute(MANDATORY), accessed by Services/S.Facades.
  • Note: Heavy use of JPA QL would blur the business logic so it might be better to move the creation of queries into dedicated query builders or other utility classes.
  • Strategies (may be combined together, of course):

    • Generic DAO - the results of JPA queries aren't type-safe and a type-safe generic DAO (CrudService) for CRUD and named query search operations (such as <T> T create(T t)) could be more convenient.

      • Suggestion: Use entity class constants for named query names to avoid typos [and help to track their usage].

    • Domain-specific DAO - a DAO for a particular entity, which provides added value aside from wrapping JPA, such as prefetching of dependent objects, managing common relations, filtering results based on the current user etc.
    • Attached-result DAO - the default behavior - JPA entities stay attached for the duration of the transaction and any changes to them will be committed to the DB
    • Detached-result DAO - if it isn't possible to stay attached, e.g. due to the data source's limitations. A common case in JEE is to return a subset of managed entity object graph mapped to transient DTOs for optimization purposes (select new BookLiteView(...) ..).
    • Back-end Integration DAO - encapsulates a Java Connector Architecture, Common Client Interface or similar adapter for a legacy resource to shield the developer from their low-level APIs.
    • Abstract DAO - reusable data access logic can be also inherited instead of delegated to a helper DAO. Purists could complain but "Inheriting from generic DAO functionality streamlines the code and is pragmatic." Especially suitable for DB-driven apps.
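A minimal sketch of the Generic DAO (CrudService) strategy described above, assuming JPA; the interface and bean names are illustrative (two source files shown together):

```java
// CrudService.java
import javax.ejb.Local;

// Type-safe CRUD operations reusable for any entity class.
@Local
public interface CrudService {
    <T> T create(T t);
    <T> T find(Class<T> type, Object id);
    <T> T update(T t);
    void delete(Class<?> type, Object id);
}

// CrudServiceBean.java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
@TransactionAttribute(TransactionAttributeType.MANDATORY)
public class CrudServiceBean implements CrudService {

    @PersistenceContext
    private EntityManager em;

    public <T> T create(T t) { em.persist(t); return t; }
    public <T> T find(Class<T> type, Object id) { return em.find(type, id); }
    public <T> T update(T t) { return em.merge(t); }
    public void delete(Class<?> type, Object id) {
        // getReference avoids loading the entity just to remove it
        em.remove(em.getReference(type, id));
    }
}
```

The generic signatures (such as <T> T create(T t)) keep the call sites type-safe without one DAO per entity.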
Transfer Object (TO) and Data Transfer Object (DTO)
  • Again, the J2EE 1.4 motivation for (D)TOs doesn't apply anymore because detached Entities aren't active elements and are POJOs
  • There may be few reasons where (D)TOs may be appropriate:

    • To provide consumer-specific views of the persistence layer / to send a subset of the data graph for performance reasons (see the Detached-result DAO above)
    • To keep outside-facing services binary compatible, which may be necessary for long-lived SOA services that share the same domain model and need to be evolved independently. Adding/changing an entity field needed by a new service isn't possible if the entity is used also by older services/clients.

      • At times it may be necessary to decouple SOA services by replacing hard references to other entities with DTOs that carry their ID and type and act as a proxy for fetching those entities via their own services. Of course this is laborious to code/maintain and less efficient due to multiple service calls so you need a sound reason to justify it.

    • Transferring data from non-JPA sources.
    • To transfer also presentation tier-specific metadata for building UIs on the fly (e.g. @Label("Enter password"), @MinLength(4))
    • To provide a simpler form for transfer over RESTful/SOAP/CORBA/or even ASCII

  • Solution: A Serializable or Externalizable POJO. (Implementing Externalizable allows for providing faster-to-process and smaller serialized forms and may (should) be automated via code generation. But use with care - it adds a lot of code that must be maintained and understood.)
  • Strategies

    • Builder-style TO - fluent API simplifies its construction and makes it easier to read. Ex.: MyDTO d = new MyDTO.Builder().name("Duke").age(42).build(); (Builder is a nested static class)
    • Builder-style Immutable TO - prescribe the setting of certain attributes - mark them final and only instantiate the DTO when build() is invoked.
    • Client-specific TO - add metadata e.g. for dynamic UI construction and data validation [you likely don't want them in domain Entities] - annotations such as @Label("First Name") are added to the DTO's getters (getFirstName()). Check JSR-303 Bean Validation.
    • Generic DTO - dynamic (basically a map of attributes), using reflection - for generic UIs. Cons: not type-safe, several times more objects (attribute name, type representation, metadata ..); see Apache BeanUtils.
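The Builder-style Immutable TO strategy is plain Java; a minimal sketch (CustomerTO is an illustrative name):

```java
// Immutable TO with a fluent nested Builder: the final fields can only be
// set through the Builder, and the TO is only instantiated by build().
public class CustomerTO implements java.io.Serializable {

    private final String name;
    private final int age;

    private CustomerTO(Builder b) {
        this.name = b.name;
        this.age = b.age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    public static class Builder {
        private String name;
        private int age;

        public Builder name(String name) { this.name = name; return this; }
        public Builder age(int age) { this.age = age; return this; }

        public CustomerTO build() { return new CustomerTO(this); }
    }
}
```

Usage reads fluently: CustomerTO duke = new CustomerTO.Builder().name("Duke").age(42).build();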
Legacy POJO Integration
  • If you need to integrate a legacy POJO into a JEE application and it needs to leverage the container services (e.g. security, lifecycle and concurrency management, DI) and participate in transactions, you can simply turn it into a Stateless session bean provided that it complies with the EJB 3 programming restrictions (has the default constructor and, prior to 3.1, an interface). Instead of adding annotations to its source code, which may not be accessible, you use ejb-jar.xml.
  • The overhead introduced by turning it into an EJB is really low and comparable to other DI frameworks.
Generic JCA
  • "It is not possible to access legacy resources such as files, native libraries, or starting new threads from EJBs without violating the programming restrictions." [p181] - see Chapter 21.1.2 of JSR 220
  • Even if you do not care about violating those restrictions, you may care for making your application non-portable.
  • Thus if you want to access a Java EE incompatible resource, perhaps transactionally, in a portable way, you need to use JCA (with the additional benefit of monitoring)
  • The JEE restrictions do not apply to Servlets, MBeans and JCA connectors, of those only JCA can participate in transactions and within a security context.
  • While a complete JCA implementation is complex, a minimal one is "surprisingly simple" - it comprises 2 interfaces, 4 classes, and 1 XML file. Two of the classes are highly reusable; the remaining part is resource dependent.
  • You don't even need to implement the Common Client Interface (CCI), which may be too abstract and complex, and provide your own one instead.
  • Depending on your requirements, you can choose which parts of JCA to implement [JH: perhaps e.g. javax.resource.spi.work when dealing with threads]
  • Example: transactional file access:

    • a Connection {write(String), close() } interface (JCA is connection-oriented) and Connection factory interface to be put into JNDI: DataSource extends Serializable, Referenceable { getConnection() }
    • a (simple) logic in the class FileConnection impl. Connection and javax.resource.spi.LocalTransaction, delegating close() to the GenericManagedConnection (below) and using JCA's ConnectionRequestInfo for implementing hashCode and equals to distinguish this connection from others
    • a simple FileDataSource class impl. DataSource providing a custom ConnectionRequestInfo implementation (e.g. with equals returning true and hashCode 1 to make all connections equal) and using JCA's ConnectionManager to create a connection, casting its output to FileConnection, using the C.M. and a ManagedConnectionFactory supplied via the constructor
    • a generic GenericManagedConnection class impl. ManagedConnection and LocalTransaction (to invoke the F.C.'s corresponding methods), which will actually instantiate the FileConnection and manage its listeners, while also providing a custom impl. of ManagedConnectionMetaData to describe it and returning null for getXAResource. Notice that the app. server uses the event notification to manage the connection.
    • a generic GenericManagedConnectionFactory class impl. ManagedConnectionFactory, Serializable and creating the FileDataSource as the conn. factory and GenericManagedConnection as the ManagedConnection. (matchManagedConnections selects a connection with matching ConnectionRequestInfo from a set of candidates or throws a ResourceException.)
    • an ra.xml file (def. by the specs) containing all the interfaces and classes under resourceadapter/outbound-resourceadapter/connection-definition (managedconnectionfactory-class=GenericM.C.F., connectionfactory-interface=DataSource, connectionfactory-impl-class=FileD.S., connection-interface=Connection, connection-impl-class=FileC., transaction-support=LocalTransaction, authentication-mechanism/a.-m.-type=BasicPassword and /credential-interface=javax.resource.spi.security.PasswordCredential)
    • Finally you pack it all into a .rar JAR, drop it into the server, configure a connection pool, and deploy the data source under some JNDI name, e.g. "jca/FileDS", to make it injectable via @Resource(name="jca/FileDS")
    • Note: Apache Commons File Transactions contains XA-compliant file access and could be easily wrapped w/ this connector

  • "a partial implementation of a JCA adapter could be easier and faster than an attempt to build a comparable solution from scratch" [p195]
Asynchronous Resource Integrator (Service Activator)
  • Invocation of a Service from a Message-Driven Bean (invoked by a messaging system via JMS), bound to a Topic (1/N:M) or Queue (1:1)
  • Prior to EJB 3.1, Service Activator was a work-around to invoke a business method asynchronously by calling it via a MDB, now we've the much simpler @Asynchronous method/type annotation
  • Since EJB 3.1, MDBs are only necessary for integration of front-end or back-end asynchronous clients/resources
  • The MDB's task is to extract the "payload" (usually an ObjectMessage, TextMessage, or BytesMessage), convert it into meaningful parameters, forward it to a Service and handle errors and "poison messages" (wrong type/content leading to an exception and reprocessing of the message) correctly. It should contain no business logic.

    • A container starts a transaction before onMessage and removes the message when it ends w/o a rollback

  • Note: Some messaging providers such as OpenMQ provide a RESTful interface (not yet standardized, but watch amqp.org) and can thus be invoked from a presentation or client (browser - ajax) tier
  • "The importance of integration tests in an environment identical to production cannot be emphasized enough." [p201]
  • MDB: "... proper error handling is hard and the debug process a real challenge"[p201]
  • Strategies

    • Front-end Integration - asynch. messages from the presentation or client tier, likely Ajax; the payload is usually XML/JSON
    • Back-end Integration - usually a legacy system; usually can't influence the message

Infrastructural Patterns and Utilities

Patterns that can be used at any layer or are not of architectural significance yet very useful.
Service Starter
  • Initialize an EJB upon server start (e.g. to load a cache)
  • EJB 3.1: @Singleton with @Startup. May also @DependsOn another one.
  • pre-EJB 3.1: an HttpServlet with an init() method and load-on-startup 1
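In EJB 3.1 the Service Starter reduces to a few annotations; a sketch (CacheWarmer and its body are illustrative):

```java
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// @Startup forces eager instantiation at deployment time; @PostConstruct
// then runs before the first client request, e.g. to pre-load a cache.
@Singleton
@Startup
public class CacheWarmer {

    @PostConstruct
    public void warmUp() {
        // load reference data into the cache here
    }
}
```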
Singleton
  • Standardized in EJB 3.1 with @Singleton (in the scope of a single JVM only)
  • By default uses @Lock(LockType.WRITE) and thus can only be accessed by a single thread at a time; use READ for concurrent access
  • Strategies (rather uses)

    • Gatekeeper - limit access to a legacy resource (e.g. due to limited # of licenses or its limited scalability) - this could be done in a JCA adapter but that's a lot more work. It can also serialize access to a single-threaded resource with LockType.WRITE
    • Caching Singleton - holds a cache of mostly read-only data, likely initialized at startup, and accessed concurrently thanks to LockType.READ
Bean Locator - encapsulates JNDI if DI is not available (e.g. a Stateful EJB can't be injected into a servlet). Use a Fluent GlobalJNDIName Builder to simplify the error-prone process of global JNDI name construction.
Thread Tracker - name a thread after the bean and business method it's currently executing for easier monitoring/troubleshooting (instead of e.g. "http-0.0.0.0-8180-1") via an interceptor (but beware that the interception is several times slower than a direct call)
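The renaming itself is plain Java; in an EJB it would sit inside an @AroundInvoke interceptor. A standalone sketch (class and label names illustrative):

```java
import java.util.function.Supplier;

// Rename the current thread for the duration of a call, then restore the
// original (pool) name so the thread can be safely reused.
public class ThreadTracker {

    public static <T> T trackAs(String label, Supplier<T> call) {
        Thread current = Thread.currentThread();
        String originalName = current.getName();
        current.setName(label); // e.g. "OrderService.placeOrder"
        try {
            return call.get();
        } finally {
            current.setName(originalName);
        }
    }
}
```

A thread dump or monitoring console then shows "OrderService.placeOrder" instead of an anonymous pool-thread name while the call is in flight.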
Dependency Injection Extender
  • Integrate another DI fwrk's managed beans into an EJB via a custom Interceptor, which will invoke the DI framework to inject its beans into the EJB upon each call (e.g. via Guice's Injector.injectMembers(invocationCtx.getTarget());)
  • The interceptor must ensure the bean's proper lifecycle w.r.t. its scope (request x session x ...)
  • Strategies:

    • Stateful Session Bean Injector - can use per-instance injectors and cache the injected components for the EJB's lifetime
    • Stateless Session Bean Injector - either all the members must be injected before each call or it's necessary to use smart proxies able to react e.g. to transactions for non-idempotent components
    • Transactional Integration - "the injected components already participate in a transaction started in the session bean"

  • Performance - interceptors and DI rely on reflection which is slower than direct calls, yet still much faster than an average DB access
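A minimal sketch of the Guice variant of this pattern (the interceptor name is illustrative; Injector.injectMembers is Guice's actual API):

```java
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

import com.google.inject.Guice;
import com.google.inject.Injector;

// Stateless Session Bean Injector strategy: re-inject Guice-managed
// members into the EJB instance before every business method call.
public class GuiceInterceptor {

    // One shared injector; modules would be passed to createInjector
    private static final Injector INJECTOR = Guice.createInjector();

    @AroundInvoke
    public Object inject(InvocationContext ctx) throws Exception {
        INJECTOR.injectMembers(ctx.getTarget());
        return ctx.proceed();
    }
}
```

The interceptor is then attached to the EJB with @Interceptors(GuiceInterceptor.class) or via ejb-jar.xml.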
Payload Extractor - factor out the (reusable) type checking and error handling for a MDB message into a reusable interceptor; poison messages moved by the interceptor to a "dead letter queue" via a stateless EJB using the JMS API
Resource Binder - put a custom resource into JNDI using a @Singleton with @Startup and the JNDI API (Context.(re)bind()). Notice that your app. server's proprietary JNDI impl. may enforce some restrictions on the resource object (such as serializability).
Context Holder - you want to pass some context data yet not to add it as a param to each method on the call tree (S.Facade > Service > DAO, indirectly PDO) => use the standard @Resource TransactionSynchronizationRegistry, with the actual context set perhaps by an interceptor. Notice that ThreadLocal may be problematic if a S.Facade invokes a Service and each is from a distinct thread pool = diff. thread.

Pragmatic Java EE Architectures

There are two opposite approaches, with best practices in one being anti-patterns in the other: the SOA architecture and the Domain-driven architecture. Which one is better depends on your requirements.

Lean Service-Oriented Architectures (online article)


  • JEE 5/6 can be used to build the leanest SOA impl. possible - (nearly) no XML, no external libraries, frameworks, only a JAR with a short persistence.xml, even more so in 3.1 where we can drop the @Local interfaces (though that makes testing more difficult)
  • SOA implies coarse-grained, distributed, stateless, atomic, self-contained, independent services with a procedural nature

    • Though in JEE, local access should always be preferred when possible (performance)

  • Building blocks:

    • Service Facade - @Stateless EJB with TransactionAttribute=REQUIRES_NEW - acts as a remoting and transaction boundary, hides implementation details while providing coarser-grained interface
    • (optional) Services (with @Local and tx=MANDATORY) - implement the actual logic in a procedural way; in simple cases, the Facade may directly use a DAO to avoid a dumb Service w/o any added value
    • (optional) DAOs (incl. EntityManager) - as needed
    • Anemic persistent data structures - entities only hold data exposed via getters/setters, w/o any behavior

      • "Although the anemic object model is considered to be an antipattern, it fits very well into an SOA. Most of the application is data-driven, with only a small amount of object-specific or type-dependent logic." [p260] Plus, for simple apps, they're easier to develop, can be generated, can be detached and sent to a client who won't get access to any business method (but beware the lazy-loading issue).

    • Transfer Objects - an essential pattern, a SOA service must stay binary compatible even if its domain objects change (e.g. a field added, needed by a new service) and thus can't directly expose JPA entities

  • The contract, represented by the Service Facade's @Remote interface and the TOs and visible to clients, is strictly separated from its realization
  • Suitable e.g. if there are multiple different clients (i.e. not only a single web UI)
  • Essential complexity of SOA

    • "SOA aims for reuse, which causes more overhead and requires additional patterns. Service components are, on average, more complex than comparable domain components." [p264]
    • The object graph sent to a client is detached (and serialized) => we must merge the changes when receiving it back, this is complex => added methods to update only parts of the obj.graph => code bloat [p101] - "The essential complexity of service-oriented applications is the synchronization of the PDO graph with the persistent store." [p266] Both writes (client updates a part of the object graph, which must be merged back) and reads (because of unavailability of lazy-loading) are difficult.
    • Lazy loading of detached objects (in the present. tier) impossible => must know/request in advance what to load (=> code bloat)
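The building blocks above can be sketched together; all names (OrderService, OrderServiceFacade, OrderProcessor, OrderTO) are illustrative, shown as separate source files:

```java
// OrderService.java - the contract visible to remote clients:
// coarse-grained, exchanging Transfer Objects only, never JPA entities.
import javax.ejb.Remote;

@Remote
public interface OrderService {
    OrderTO placeOrder(OrderTO order);
}

// OrderServiceFacade.java - the Service Facade: remoting and
// transaction boundary, hiding the implementation behind the contract.
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public class OrderServiceFacade implements OrderService {

    @EJB
    private OrderProcessor processor; // a @Local Service with tx=MANDATORY

    public OrderTO placeOrder(OrderTO order) {
        // map TO -> entity, delegate to the Service, map the result back
        return processor.process(order);
    }
}
```

In trivial cases the facade could use a DAO directly instead of delegating to a dumb Service without added value.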

Objects- or Domain-Driven Design (online article)


  • As opposed to SOA, it relies mainly on well-defined encapsulated (persistent) objects with state and behavior and tends to be stateful.
  • Building blocks:

    • Entity - the (behavior-rich) Persistent Domain Object pattern. "A real domain-driven architecture should actually only be comprised of Entities with aspects." [p256]
    • (optional) Control - a Service/DAO - implements cross-cutting aspects (Entity-independent, reusable logic and orchestration of multiple PDOs) - often for reusable queries, data warehouse jobs, reports etc.; used only exceptionally in DDD
    • Boundary - a Gateway - the border between the UI and the business logic, exposes PDOs as directly as possible; usually stateful (to keep PDOs attached); "only necessary because of the Java EE 5/6 shortcomings ... In an ideal environment, a Gateway could also be implemented as an aspect, or even a static method on a PDO." [p266]
    • (optional) TO - used for optimization purposes when there is a real need

  • Contrary to SOA, there is no separation between the specification and the realization (which is pragmatic, requires less effort)
  • "The main advantage of the stateful domain-driven approach is the transparent synchronization and lazy loading of dependent entities." [p267] [JH: Also, with direct access to PDOs, it's much easier to implement and evolve the UI - see Seam. We could say this is (much) more productive and "agile" than SOA.]
  • Essential complexity of DDD

    • DDD requires a stateful Gateway holding a reference to an attached root PDO - "The statefulness itself isn't a scalability problem, rather than the amount of attached entities during the transaction." [p266] (every entity used will stay cached till the session ends)
    • "... you will have to think about freeing the resources and detaching superfluous PDOs. This could become as complex as the synchronization of detached entities or loading of lazy relations in a service-oriented component." [p267]
    • => perhaps necessary to tune the cache settings or use proxies (e.g. OIDs/References) instead of actual objects and load objects as needed.
    • "Whether the aggressive caching behavior of your provider becomes a problem or not depends on many factors, including the number of concurrent sessions, the session length, and the number of accessed PDOs per session." [p267]

  • It's necessary to find a good compromise regarding the coupling of different (sub)domain PDOs (such as Address, Customer), i.e. between independence of components and ease of use. We must also deal with circular dependencies (think of bidirectional dependencies) between PDOs and thus components. Be pragmatic:

    • "Few bidirectional dependencies between components are harmless." [p268] - especially considering the unnecessary laboriousness and complexity of possible workarounds
    • You can use a proxy that resolves the object on demand using a service, as SOA does, but this is not very transparent; "it actually only obfuscates the dependency between two entities" [p268]
    • The favourite solution is to refactor/restructure your components to get the related PDOs into one, though that breaks the "maximal cohesion" principle

Notes on Terminology


  • The term "anemic domain model" likely originates from Martin Fowler
  • The term "Domain-Driven Design" has been made popular by Eric Evans via his book Domain-Driven Design: Tackling Complexity in the Heart of Software, published in 2003