Thursday, November 30, 2006

Best Places To Use Ajax

  1. Form driven interaction.

    Forms are slow. Very slow. Editing a tag (the old way) on a del.icio.us bookmark? Click on the edit link to load the edit bookmark form page, then edit the field and hit submit to wait for the submission to go through, then return to the previous page and scroll down to find the bookmark to see if the tags look right. Ajax? Click on the edit link to instantly start changing tags, click on the submit button to asynchronously send off changes to the tags and quickly see in place what changed, no reloading the entire page.

    1. Form driven interaction - Subset: Linked Select Menus.

      Imagine a T-shirt with three options: Size, Color, and Style. When tracking inventory for your product, you know you have Large, Red, Polo shirts in stock, but you’re out of Small, Blue T-shirts. It is frustrating for the user to pick this combination and then receive an error on the checkout page stating that you are out of stock, and then have to go back to the selection process and reconfigure the item. Using AJAX, you can check the stock of the options as the user picks them and only return or show the items which are in stock.

    2. Form driven interaction - Subset: Autosave.

      Think of someone writing in Word. Which button do they use the most? Save.

      With JavaScript you can do one better. Not only can you have a save & continue that works just like the del.icio.us forms – you can autosave! Remember to tell the user this, as simply knowing it relaxes quite a lot of people. Properly explained count-down clocks are preferred, for obvious reasons. (A minimal autosave sketch appears after this list.)

  2. Deep hierarchical tree navigation.

    First of all, applications with deep hierarchical tree navigation are generally a nightmare. Simple flat topologies and search/tagging work very well in most circumstances. But if an application really calls for it, use JavaScript to manage the topology UI, and Ajax to lessen the burden on the server by lazy loading deep hierarchy data. For example: it’s way too time consuming to read discussion threads by clicking through and loading completely new pages just to see a one line response.

  3. Rapid user-to-user communication.

    In a message posting application that creates immediate discussions between people, what really sucks is forcing the user to refresh the page over and over to see a reply. Replies should be instant; users shouldn’t have to obsessively refresh. Even Gmail, which improves on the old hotmail/yahoo mail ‘refresh inbox, refresh inbox’ symptom, doesn’t really push Ajax far enough yet in terms of notifying users of new mail instantly.

  4. Voting, Yes/No boxes, Ratings submissions.

    It’s really too bad there are no consistent UI cues for Ajax submission, because submitting a vote or a yes/no response is so much less painful when the submission is handled through Ajax. By reducing the time and impact of clicking on things, Ajax applications become a lot more interactive – if it takes 40 seconds to register a vote, most people would probably pass unless they really care. If it takes 1 second to vote, a much larger percentage of people are likely to vote.

  5. Filtering and involved data manipulation.

    Applying a filter, sorting by date, sorting by date and name, toggling filters on and off, etc. Any highly interactive data manipulation should really be done in JavaScript instead of through a series of server requests. Finding and manipulating a lot of data is hard enough without waiting 30 seconds between each change in views; Ajax can really speed this up.

  6. Commonly entered text hints/autocompletion.

    Entering the same text phrases or predictable text phrases is something software/javascript can be good at helping out with. It’s very useful in del.icio.us and GMail, for quickly adding tags/email addresses.

  7. Interactive Errors

    If someone is entering complicated data, it doesn’t make sense to tell them they have failed only after a lengthy submission process. Ajax can speed up this workflow by quickly letting the user know of an error condition before they try to submit. Example: a username chooser. Instead of making the user submit the entire form, try a new name and repeat, or keep trying an ‘is this name taken?’ form, the username chooser can simply indicate whether the username is unique or not while the user is still typing it. (See the username-check sketch after this list.)

  8. Long Running Queries/Remote Calls

    If a query or a call to a remote webservice is going to take a long time that cannot be avoided, Ajax works well to manage the time a user waits for the call to return. For example, SWiK uses Ajax to fill in results from webservices detailing new projects: a user doesn’t have to wait for Google’s webservice to return before starting to edit a new project.

  9. Computationally Expensive Operations

    Unfortunately, Javascript has a tendency to be quite slow. Complex math or number crunching just isn’t Javascript’s forte. Additionally, heavy Javascript computation can slow the basic user interface to a crawl. An XMLHTTPRequest call can be helpful here, pushing expensive computations to beefier remote servers.

  10. Server Savings

    Sometimes, a process users do over and over on a site requires only a small amount of new data to be sent over the wire, but loading entire new pages can be a strain on the servers in bandwidth and resources. Ajax can be used to load pages more efficiently, as seen in various tests. Of course the ease of making new or multiple requests from the server using Ajax also means that it’s easy to overtax server resources as well.

  11. Interactive Panning And Moving Over Data

    Moving and scanning over large data sets makes it impractical to pre-load all of the data. Loading the data just ahead and just behind the user gives the appearance of the entire data set being accessible, and helps eliminate loading times. A great example of this is Google Maps’ scrolling tiles system that gives the effect of moving over a map by picking up tiles behind and placing them ahead of the user, filling them with new data requested via Ajax.
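
To make one of these concrete, here is a minimal autosave sketch for the autosave subset of item 1 above. It is an illustration under assumptions, not production code: autosave.php, the 'draft' textarea and the 'saveStatus' element are hypothetical names of my own, and the server side is left out.

// Cross-browser XMLHttpRequest creation: the native object where
// available, otherwise the ActiveX fallback older IE versions need.
function createXHR() {
    if (window.XMLHttpRequest) return new XMLHttpRequest();
    return new ActiveXObject('Microsoft.XMLHTTP');
}

// Every 60 seconds, POST the current draft to the (hypothetical)
// autosave.php endpoint and confirm the save in a status element,
// so the user knows the count-down exists and what it does.
function autosave() {
    var xhr = createXHR();
    xhr.open('POST', 'autosave.php', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            document.getElementById('saveStatus').innerHTML = 'Draft saved.';
        }
    };
    xhr.send('body=' + encodeURIComponent(document.getElementById('draft').value));
}
setInterval(autosave, 60000);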
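
And a sketch of the interactive username check from item 7, again with made-up names (check_user.php and nameStatus are assumptions, and the script reuses createXHR() from the autosave sketch above):

// Fire a request on every keystroke of the username field, e.g.
// <input name="username" onkeyup="checkUsername(this.value)">
function checkUsername(name) {
    var xhr = createXHR();
    xhr.open('GET', 'check_user.php?name=' + encodeURIComponent(name), true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // check_user.php is assumed to answer 'taken' or 'free'
            document.getElementById('nameStatus').innerHTML =
                xhr.responseText == 'taken' ? 'Sorry, that name is taken.'
                                            : 'That name is available!';
        }
    };
    xhr.send(null);
}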

Top Ten Open Source Ajax/DHTML Libraries For Web Developers

Hi, friends! I made a list of the top 10 libraries that I have come across or that I personally use. Libraries can be a web developer’s best friend. They are great resources to learn from and can save hours and hours of time. These libraries include JavaScript, Ajax, Colors, PHP, and CSS. They should be in any web developer’s bookmarks, so go ahead and look through these libraries and bookmark your favorite ones. The list is in no particular order.

1) Moo.fx - A superlightweight, ultratiny, megasmall JavaScript effects library, written with prototype.js. It’s easy to use, fast, cross-browser, standards compliant, provides controls to modify Height, Width, and Opacity with builtin checks that won’t let a user break the effect with multiple crazy clicks. It’s also optimized to make you write the least code possible.

2) Rico - An open source JavaScript library for creating rich internet applications. Provides full Ajax support, drag and drop management, and a cinematic effects library.

3) Swat - Developed by silverorange, Swat is an open source web application toolkit built with PHP.

4) ColorCombos - Who would’ve thought a color library would end up mixed in with a bunch of JavaScript and PHP libraries? Well they do have a pretty sweet little color library for finding color combinations, all you do is select the color and they show you some nice combos that work with that color.

5) script.aculo.us - Provides you with easy-to-use, compatible and, ultimately, totally cool JavaScript libraries to make your web sites and web applications fly, Web 2.0 style. I’m sure I’m not alone when I say this library is my favorite.

6) Mochikit - A kick-ass lightweight JavaScript library that will help you get shit done fast.

7) Dynamic Drive CSS Library - Here you’ll find original, practical CSS codes and examples such as CSS menus to give your site a visual boost.

8) PEAR - A framework and distribution system for reusable PHP components. PEAR provides the above mentioned PHP components in the form of so-called “packages”.

9) DHTML Goodies - A good sized library of DHTML and AJAX scripts.

10) dojo - Open source JavaScript toolkit that makes professional web development better, easier, and faster.

Honorable Mentions

11) Cross Browser | Toys - Huge JavaScript library.

12) Yahoo UI Library - The Yahoo! User Interface (YUI) Library is a set of utilities and controls, written in JavaScript, for building richly interactive web applications using techniques such as DOM scripting, DHTML and AJAX. The YUI Library also includes several core CSS resources.

Big thanks to all of those who have helped in any way to put one of these libraries together.

I hope you find this list helpful. Keep in mind there are hundreds of libraries available online; I don’t know all of them and I’m sure I missed a few good ones, so feel free to add your favorites in the comments below.

Friday, November 24, 2006

Want to Learn How to Make Money By Blogging?

There are two major types of business models that entrepreneurs use to make money blogging. The first and most common way to turn a blog into a profit making machine is to sell advertising to different companies and brands who want to reach that blog’s readers. The second kind of money making blog is one that helps a single brand improve its image by creating positive associations between the blog and the product in the mind of consumers. Both kinds of blogs can make a lot of money, especially if the creator has a keen mind for marketing.

If you are blogging with the goal of selling advertising, there are two basic ways that you can go about recruiting sponsors who want to put ads on your site; you can let someone else do all of the legwork, or you can do the work yourself and keep all of the revenue.
Within the first group, many people make money blogging by selling space through Google’s AdSense program. The advantages of this program are numerous, as it requires very little effort on the part of the blogger or webmaster to begin raking in profits. However, most people discover that they make less money through this method than they had hoped that their blog would earn.

Selling advertising directly to companies who want to put banner ads or sponsored links on your blog can take quite a bit of time, but it is often fairly lucrative. If you have a lot of contacts in industries that are related to the topic of your blog, you may want to try to go this route. People who have a strong background in sales and are experienced at pitching proposals can make quite a bit of money by renting blog space to interested companies.
The most serious problem with this model is that you often have to build quite a sizable readership before you can attract advertisers, which can mean that you have to do several months of work before you start to make money blogging.

As blogging becomes a more and more lucrative business, a lot of established companies are considering how they can get into the action. One way that companies are capitalizing on the blog movement is by having blogs that provide a kind of friendly face for their corporation. Often, a company will employ an established blogger to create a weblog designed specifically to appeal to that company’s customers and to create positive associations with the brand in consumers’ minds. More than one writer who never
even dreamed that he or she could make money blogging has been approached by a company and offered quite a pretty penny for this kind of gig.



Income Streams for Bloggers

Advertising Programs - Perhaps the most obvious change in the past few months has been the addition of a variety of viable advertising options for bloggers. No longer are bloggers only presented with the Adsense and/or BlogAds choice - instead they now have a massive array to choose from. Getting the most publicity recently have been Chitika’s eMiniMalls, of course, but there are so many other options now, including:

Adgenta, CrispAds, Text Link Ads, IntelliTxt, Peak Click, DoubleClick, Tribal Fusion, AdBrite, Clicksor, Industry Brains, AdHearUs, Kanoodle, AVN, Pheedo, Adknowledge, YesAdvertising, RevenuePilot, TextAds, SearchFeed, TargetPoint, Bidvertiser, Fastclick, ValueClick and OneMonkey (to name just some of the options - I’m sure I’ve forgotten some); there is a smorgasbord of options. Of course there is more to come, with MSN adCenter and YPN both in beta testing and a variety of other advertising systems currently in development (so I hear).

RSS Advertising - The past 12 months have seen some advances in RSS advertising also. I’ve yet to hear of any bloggers making big dollars through it to this point - but as improvements are made to the ad programs exploring this, I’m sure we’ll start to see examples of it being profitable.

Sponsorship - In addition to the array of advertising programs that are available to join, there is a growing awareness in business of the value and opportunity that exists in advertising directly on blogs. I’m hearing more and more examples of this and have been fortunate to have a couple of ad campaigns of my own in the past month - one with Adobe a couple of weeks ago and another just completed with Ricoh for a new digicam over at my Digital Camera Blog. These are not isolated cases - as I say, I know of many blogs exploring sponsorship with advertisers at present and suspect we’ll see more of it in the year ahead. Sponsorship is also happening on a post by post basis, with some bloggers being paid to write on certain topics by companies - either as one-offs or on a regular basis.

Affiliate Programs - There are larger affiliate programs like Amazon, Linkshare, Clickbank and Commission Junction but also literally thousands of others from the large to the very small.

Blog Network Opportunities - with the rise in popularity of Blog Networks, bloggers are also being presented with more places to earn an income from their blogging - by writing for and with others. While it might be difficult to get a writing gig with one of the bigger networks, there are plenty who are always asking for new bloggers to join and who are willing to pay bloggers using a variety of payment models. While there are distinct advantages to blogging for yourself, blogging for an established network that will handle a lot of the set up/promotion/admin/SEO etc. has its advantages also. More and more bloggers are combining writing for themselves on their own blogs with taking on blog network blogs as additional income streams.

Business Blog Writing Opportunities - as blogging has risen in its profile as a medium, more and more businesses are starting blogs. Many of these companies have internal staff take on blogging duties - but an increasing number of them are hiring specialist bloggers to come on and run their blogs. I know of a number of bloggers who in the past month or two have been approached for such paid work. Check out Bloggers for Hire if you’re looking for this type of work.

Non Blogging Writing Opportunities - Also becoming more common are bloggers being hired to write in non-blogging mediums. Manolo’s recent coup of a column in the Washington Post is just one example of this, as bloggers are increasingly being approached to write for newspapers, magazines and other non-blog websites. Alongside this is the rise of bloggers as published book authors - to the extent that one blogger I spoke with this week complained to me that they were one of the few bloggers that they knew who didn’t have a book deal!

Donations - Tip Jars and donation buttons have been a part of blogging for years now but this last year saw a number of bloggers go full time after fundraising drives. Perhaps the most high profile of these was Jason Kottke of kottke.org who through the generosity of his readership was able to quit his job and become a full time blogger.

Flipping Blogs - Also more common in 2005 was the practice of ‘Blog Flipping’ - or selling of blogs. This has happened both on an individual blog level (I can think of about 20 blogs that sold this year) but also on a network level (the most obvious of these being the 8 figure sale of Weblogs Inc to AOL).

Merchandising - My recent attempt to sell ProBlogger.net T-shirts wasn’t a raging success, but it is an example of how an increasing number of bloggers are attempting to make a few extra dollars from their blogs by selling branded products through programs like Cafepress (although I have to say they’ve lost one of my own orders and are being quite unresponsive to my requests to follow it up at present). While I didn’t have a lot of success with merchandising - quite a few larger blogs are seeing significant sales - especially blogs with a cult following. I’m not at liberty to discuss details - but I know of one largish blog which will see sales over $20,000 in merchandise for the calendar year of 2005.

Consulting and Speaking - While it has been popular for established consultants to add blogs to their businesses we’re also starting to see bloggers with no consulting background earning money by charging readers for their time in consulting scenarios BECAUSE of the profile that their blogs have built them. Blogging has the ability to establish people as experts on niche topics and we all know the value of being perceived as an expert. I spoke to one blogger last month who charges himself out at over $200 an hour for speaking and consulting work - his area of expertise was something that he knew little about 18 months ago - but through his blog he’s become a leader in his field and a minor celebrity in his industry.

As time rolls on there are more and more blog earning opportunities opening up. Feel free to suggest your own ideas in comments below.

Thursday, November 23, 2006

Open Source Databases Are Up To 60% Cheaper!

You know that open source databases can save enterprises up to 60 per cent over proprietary products, according to recently collected research data.

A senior analyst covering database management systems estimated that average savings on the total cost of ownership are about 50 per cent. The data is based on surveys and customer interviews.

Open source databases such as EnterpriseDB, Ingres and MySQL do not carry licence fees, and management tools for them are less expensive than for proprietary databases from Oracle, Microsoft and IBM.

Open source databases especially outshine their proprietary competitors in low-end applications with databases of less than 200GB in size.

One finding of this research is that "Eighty per cent of the applications typically use only 30 per cent of the features found in commercial databases," and "The open source databases deliver those features today."

But the hitch is that open source databases generally lack the features needed for mission-critical applications, trailing behind their proprietary peers in security, uptime, performance and features such as XML support.

Enterprise applications from Oracle and SAP also do not support open source databases today, but that is expected to change "within a couple of years".

Open source database vendors typically do not position their products as low-cost alternatives.

But customers still consider price the primary benefit of open source.

"The number one reason why any customer would choose an open source database is cost. That still holds true today."

But the low price is also enabling companies to set up new projects that would previously have been too expensive, such as data mining of log files and setting up data repositories.

In response to the competition from low-cost open source databases, Oracle launched a free database last year that is essentially a scaled-down version of its enterprise-grade Oracle Database 10g.

The application targets test deployments for developers and students rather than enterprises.

Tuesday, November 21, 2006

Web 2.0 and AJAX

Web 2.0 is a strange thing in that it doesn't really exist. You can't buy Web 2.0; you can't buy a Web 2.0 programming language, and you can't buy Web 2.0 hardware. In many ways, the phrase "Web 2.0" is a marketing phrase like "paradigm shift" or "the big picture". The reason for this vagueness is that Web 2.0 doesn't have a tight definition. What the phrase Web 2.0 tries to express is that modern websites are so much better than early websites that they'd better be given a different name. So it is down to marketing.

Web developers need to demonstrate that although they use the same Internet, the same web browsers and the same web servers as their competitors, their websites are in fact an order of magnitude better. “Our competitors only do websites. We do Web 2.0 websites!"

The client is, of course, hugely impressed that his new website will be a Web 2.0 website. But what should he expect to see for his money? What is the client's view of what Web 2.0 should offer? Is it all smelling of roses or are there some thorny issues too?

I propose that there are in fact three facets to a Web 2.0 website:

1. AJAX

2. Social Networking (Building Communities)

3. Broadband

AJAX is technical and can only be performed by a technically skilled developer, social networking is vague, woolly and is based more on marketing models than web skills, and broadband has been popular for a long time. Even stranger is the fact that AJAX has been available to developers for at least 5 years, and social networking has been around even longer. It is simply the re-branding of these things that is causing the rise in the popularity of these old but current "buzzword" technologies.

AJAX is a mash up of technologies. We've had asynchronous JavaScript and XML for many years, but until somebody said "I name this mash up - AJAX" it remained out of the mainstream. The same goes with social networking. Forums, blogs, and community-based websites have been around for many years, but giving it a title like "social networking" combined with the success of websites such as www.Youtube.com and www.Linkedin.com makes it mainstream and popular. And to cap it all, the new names invented to re-brand existing technologies are combined into the all encompassing name of Web 2.0. Web 2.0 is simply rebranding the rebranded.

In summary, we've had the ability to create Web 2.0 websites for years. It is not new technology; it is simply the renaming and repackaging of something we already have and enjoy. Marketing has made buzzwords of what we already knew and the public and developers are lapping it up.

The third facet of Web 2.0 was broadband, or as I prefer to call it, broadband abuse. Many developers believe that Web 2.0 is defined by how long it takes to download a website or the size of the broadband connection required to view the site comfortably. They believe that the bigger the connection required or the longer the website takes to download, the more Web 2.0ish the website must be. In my opinion, however, adding vast images, video footage, badly implemented rounded corners and streaming music does not make a Web 2.0 website. It simply makes a regular website that is bloated and annoying.

Presuming that you understand what makes a Web 2.0 website and you are keen to build one, there is an important area that you should consider before you start. And that is the area of Search Engine Optimisation.

So what about search engines? Do Web 2.0 websites perform well on search engines? Do search engines need to change to keep pace with development? If we ignore the broadband abusers and look at the two key facets of Web 2.0, AJAX, and social networking we get two very different answers.

Working somewhat in reverse here, the conclusion is that AJAX is a search engine killer. Adding AJAX functionality to your website is like pulling the plug on your search engine strategy. Social networking sites on the other hand typically perform exceptionally well on search engines due to their vast amount of visitor provided content.

The reason AJAX is a search engine killer is pretty obvious once you know how the technology works, and at the risk of offending all the people who know this already, I'll recap in a brief paragraph.

Simply put, AJAX removes the need to refresh a page in a browser. Say, for example, you are on the product-finding page of a website: you can type in a search phrase for the product you want to find and press the submit button. Without refreshing the page, the asynchronous JavaScript runs off, grabs the results of the search, and inserts the details of the found products into the very same page as you sit and look at it.
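
As a rough sketch of that flow (the names are mine: a hypothetical search.php that returns an HTML fragment, a form field named q, and a results div on the page):

// Called from <form onsubmit="return findProducts(this.q.value);">.
// Fetches matching products and inserts them into the current page,
// so the browser never leaves or reloads it.
function findProducts(phrase) {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject('Microsoft.XMLHTTP');
    xhr.open('GET', 'search.php?q=' + encodeURIComponent(phrase), true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            document.getElementById('results').innerHTML = xhr.responseText;
        }
    };
    xhr.send(null);
    return false; // cancel the normal form submission and page refresh
}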

For the website user this addition of AJAX to the website feels fantastic. No page reloads, no browser flicker, no click noise, but sheer joy. And so the rush for AJAX websites begins, because the visitors will love it.

But what about the search engines, what will they make of web pages that use AJAX to find content? Importantly, search engines don't run JavaScript. Oh no, not ever, no way José. So the search engine will never run your AJAX. To the search engine, huge areas of your website content are now hidden, never to be spidered, indexed, or found. This really limits the usefulness of AJAX in many applications.

An ideal application of AJAX is Google Maps, where as you drag the map around the browser window, the newly exposed areas of the map are retrieved and shown on the page without a page refresh—smooth, seamless, and very impressive. Does Google care if the single map page gets found by searching? Certainly not!

A very poor application of AJAX is the product portfolio where you can find and view product details for hundreds of products without ever refreshing the page. Nice to use? Yes. Navigation friendly? No—try hitting the back button when the browser ignores your last 20 clicks because you have remained on the same page! Search engine friendly? Forget it. You are invisible.

So what is the solution to the AJAX invisibility cloak that Master Harry Potter himself would be proud of? There are 5 options:

  1. Build two websites, one using AJAX that is lovely for visitors and another using more traditional techniques for search engine spiders to find. If you can find a client to finance both, you have found a client with too much money!
  2. Drop AJAX. Let the visitors suffer the page refresh.
  3. Run with AJAX anyway and just put up with the fact that your perfectly formed website will receive no search engine visitors.
  4. Lobby the major search engines to rebuild their spidering algorithms to take into account AJAX pages and to run JavaScript on the pages they index. This option might take some time :-)
  5. Increase your Google AdWords payments and ramp up traditional advertising to counteract the missing website traffic from the search engines.

And so, a bleak picture of AJAX is painted, and by implication of Web 2.0 as well. The good applications of AJAX and Web 2.0 are few and far between, but when you do find them they are fantastic. Do you remember that feeling when you first used Google Maps? Do you find that all other mapping websites now feel old fashioned? I would go as far as to say that it was Google Maps that single-handedly brought the technology of AJAX to the masses.

The second most impressive application of AJAX is another Google idea, where when typing in the search field on the Google website, AJAX is used to find results even as you type the words—incredibly quick to use, fantastic for the website visitor, and really demonstrating the technology in a great light.

Isn't it hugely ironic, then, that the one website that best demonstrates the very technology that, if used on our own websites, will force us to spend more on Google AdWords, is in fact Google.


Tuesday, November 14, 2006

2006 Open Source Content Management System Award Winner Announced

Following a nomination round and eight weeks of voting, Packt are pleased to announce Joomla! as the winner of the 2006 Open Source Content Management System Award. Joomla! collects a first prize of $5,000 and the title of Packt Open Source CMS Award Winner for 2006.

The final result, as voted for by judges from The Open Source Collective, MySQL, the Eclipse Foundation, and 16,000 users on www.PacktPub.com saw a tie for first place between Joomla! and Drupal. In the event of a tie, a fourth independent judge would be brought in. This was Apoorv Durga who is a member of CMSPros and runs his own blog [http://apoorv.info/] on portals and content management. This crucial vote ended up with Joomla! triumphing over Drupal by one point.

The final result was as follows:

1. Joomla!- $5,000
2. Drupal - $3,000
3. Plone - $2,000

Please note that in deciding the final positions judges were asked to give their top three, with their first choice receiving three points, their second two points and their third one point.

Choosing the top three proved to be a difficult experience for the judges, due to the quality of the finalists and their ability to suit different tasks depending on the objective of the user. "All the CMS’s that made the top 5 are very good" declared Scott Goodwin, representing The Open Source Collective, "and I wouldn't hesitate to use any of them depending on what I'm trying to accomplish."

The judges had strong compliments for all five finalists, with some of the highlights listed below:

Drupal

* Has been around for quite some time and is stable and actively developed
* Well coded and has an available granular permissions system and a strong eye for security
* Configuration was a breeze
* Lightweight installation
* Plethora of modules and themes
* Exceptional documentation and has an active and friendly community
* The node concept is very good


e107

* Easy to setup and install
* Wide selection of themes and modules
* Provides lots of flexibility
* Backend seems well put together
* Drop down menu is a nice touch and is organized well


Joomla!

* Very easy to install and use with lots of extensions and modules
* The documentation is exhaustive and concise
* Admin user interface is intuitive and powerful
* The backend of Joomla! is very usable and the WYSIWYG editor for content was nice
* Seems like it would scale well and provides a lot of customization options
* Large and active community


Plone

* Very flexible and powerful
* Great user interface
* Very clean default installation
* Lots of addon modules
* Worth taking the steep learning curve
* Impressed with the customization it offers
* Integration with LDAP or other login systems is a plus


XOOPS

* Minimalist initial install
* Great community support
* Provides lots of addon modules and themes
* Lots of functionality
* Mature and has a very good permissions system

Monday, November 13, 2006

W3C DOM - Introduction

The Document Object Model (DOM) is the model that describes how all elements in an HTML page, like input fields, images, paragraphs etc., are related to the topmost structure: the document itself. By calling the element by its proper DOM name, we can influence it.

On this page I give an introduction to the W3C Level 1 DOM that has been implemented in the newest generation of browsers. It will give you an overview of how the DOM works and what you can do with it.

First some words about the DOM Recommendation and the purpose of the DOM, then I teach you what nodes are and how you can walk the DOM tree from node to node. Then it's time to learn how to get a specific node and how to change its value or attributes. Finally, I'll teach you how to create nodes yourself, the ultimate purpose of the new DOM.

The Recommendation

The Level 1 DOM Recommendation has been developed by the W3C to provide any programming language with access to each part of an XML document. As long as you use the methods and properties that are part of the recommendation, it doesn't matter if you parse an XML document with VBScript, Perl or JavaScript. In each language you can read out whatever you like and make changes to the XML document itself.

As some of you might have guessed, the previous paragraph describes an ideal situation, and differences (between browsers, for instance) do exist. Generally, however, they're fairly small, so that learning to use the W3C DOM in JavaScript will help you learn to use it in another programming language.

In a way HTML pages can be considered as XML documents. Therefore the Level 1 DOM will work fine on an HTML document, as long as the browser can handle the necessary scripts.

You can read out the text and attributes of every HTML tag in your document, you can delete tags and their content, you can even create new tags and insert them into the document so that you can really rewrite your pages on the fly, without a trip back to the server.

Because it is developed to offer access to and change every aspect of XML documents, the DOM has many possibilities that the average web developer will never need. For instance, you can use it to edit the comments in your HTML document, but I don't see any reason why you would want to do so. Similarly, there are sections of the DOM that deal with the DTD/Doctype, with DocumentFragments (tiny bits of a document) or the enigmatic CDATA. You won't need these parts of the DOM in your HTML pages, so I ignore them and concentrate instead on the things that you'll need in your daily work.

Nodes

The DOM is a Document Object Model, a model of how the various objects of a document are related to each other. In the Level 1 DOM, each object, whatever it may be exactly, is a Node. So if you do

 
<P>This is a paragraph</P>

you have created two nodes: an element P and a text node with content 'This is a paragraph'. The text node is inside the element, so it is considered a child node of the element. Conversely, the element is considered the parent node of the text node.

              

        <P>                 <-- element node
         |
"This is a paragraph"       <-- text node

If you do

<P>This is a <B>paragraph</B></P>


the element node P has two children, one of which has a child of its own:

              


           <P>
            |
    ----------------
    |              |
"This is a "      <B>
                   |
              "paragraph"

Finally there are attribute nodes. (Confusingly, they are not counted as children of an element node. In fact, while writing these pages I've done a few tests that seem to indicate that Explorer 5 on Windows doesn't see attributes as nodes at all.) So

<P ALIGN="right">This is a <B>paragraph</B></P>


would give something like

              

                  <P>
                   |
    ---------------------------
    |              |          |
"This is a "      <B>       ALIGN
                   |          |
              "paragraph"  "right"

So these are element nodes, text nodes and attribute nodes. They constitute about 99% of the content of an HTML page and you'll usually busy yourself with manipulating them. There are more kinds of nodes, but I skip them for the moment.

As you'll understand, the element node P also has its own parent, this is usually the document, sometimes another element like a DIV. So the whole HTML document can be seen as a tree consisting of a lot of nodes, most of them having child nodes (and these, too, can have children).

                <BODY>
                   |
    -----------------------------------
    |                                 |
   <P>                         lots more nodes
    |
    ---------------------------
    |              |          |
"This is a "      <B>       ALIGN
                   |          |
              "paragraph"  "right"

Walking through the DOM tree

Knowing the exact structure of the DOM tree, you can walk through it in search of the element you want to influence. For instance, assume the element node P has been stored in the variable x (later on I'll explain how you do this). Then if we want to access the BODY we do

x.parentNode

We take the parent node of x and do something with it. To reach the B node:

x.childNodes[1]

childNodes is an array that contains all children of the node x. Of course numbering starts at zero, so childNodes[0] is the text node 'This is a' and childNodes[1] is the element node B.

Two special cases: x.firstChild accesses the first child of x (the text node), while x.lastChild accesses the last child of x (the element node B).

So supposing the P is the first child of the body, which in turn is the first child of the document, you can reach the element node B by either of these commands:

document.firstChild.firstChild.lastChild;
document.firstChild.childNodes[0].lastChild;
document.firstChild.childNodes[0].childNodes[1];
etc.

or even (though it's a bit silly)

document.firstChild.childNodes[0].parentNode.firstChild.childNodes[1];

Getting an element

However, walking through the document in this way is quite cumbersome. You'll need to be absolutely certain of the structure of the entire DOM tree and since the whole purpose of the Level 1 DOM is to allow you to modify the DOM tree, this could lead to problems really quickly.

Therefore there are several ways of jumping directly to an element of your choice. Once you have arrived there, you can walk the last bit of the DOM tree to where you want to be.

So let's continue with our example. You want to access the element node B. The very simplest way is to directly jump to it. By the method document.getElementsByTagName you can construct an array of all tags B in the document and then go to one of them. Let's assume that this B is the first one in the document, then you can simply do

var x = document.getElementsByTagName('B')[0]

and x contains the element node B. First you order the browser to get all elements B in the document (document.getElementsByTagName('B')), then you select the first of all B's in the document ([0]) and you've arrived where you want to be.

You could also do

var x = document.getElementsByTagName('P')[0].lastChild;

Now you go to the first paragraph in the document (we assume that our P is the first one) and then go to its lastChild.

The best way, the only way to be certain that you reach the correct element regardless of the current structure of the DOM tree, is to give the B an ID:

<P>This is a <B ID="hereweare">paragraph</B></P>


Now you can simply say

var x = document.getElementById('hereweare');

and the element node B is stored in x.

Changing a node

Now that we have reached the node, we want to change something. Suppose we want to change the bold text to 'bold bit of text'. We then have to access the correct node and change its nodeValue. Now the correct node in this case is not the element node B but its child text node: we want to change the text, not the element. So we simply do

document.getElementById('hereweare').firstChild.nodeValue='bold bit of text';

and the node changes.

You can change the nodeValue of each text node or each attribute. Thus you can also change the ALIGN attribute of the paragraph.

This, too, is quite simple. Take the node you need (the B's parentNode, in this case), then use the setAttribute() method to set the ALIGN attribute to the value you want:

function test2(val) {
    // Feature-detect the Level 1 DOM methods we need
    if (document.getElementById && document.createElement) {
        var node = document.getElementById('hereweare').parentNode;
        node.setAttribute('align', val);
    }
    else alert('Your browser doesn\'t support the Level 1 DOM');
}

Creating and removing nodes

Changing nodes is nice, it can even be useful, but it's nothing compared to actually creating your own nodes and inserting them into your document. I can easily add an HR right below this paragraph and remove it quite as easily.

Creating the element is done by a special method:

var x = document.createElement('HR');

Thus an HR is created and temporarily stored in x. The second step is to insert x into the document. I wrote a special SPAN with ID="inserthrhere" at the point where it should appear. So we use the appendChild() method on the SPAN; the HR is made a child of the SPAN and it magically appears:

document.getElementById('inserthrhere').appendChild(x);

Removing it is slightly more complex. First I create a temporary variable node to store the SPAN, then I tell it to remove its first child (the HR).

var node = document.getElementById('inserthrhere');
node.removeChild(node.childNodes[0]);

In the same way we can create a new text node and append it to our faithful B ID="hereweare":

var x = document.createTextNode(' A new text node has been appended!');
document.getElementById('hereweare').appendChild(x);

If you run this and then check the page, you will notice that executing the old functions does not remove the new text node; that's because it has become a separate node:

           
            <B>
             |
    ----------------------
    |                    |
"paragraph"     " A new text node has been appended!"

(To merge them into one node use the normalize() method that's sadly not supported by Explorer 5 on Windows).

I won't tell you how to remove the text node, try writing that script yourself. It'll be a useful exercise.

Level 0 DOM



This page treats some DOM history and then describes the Level 0 DOM.

First of all a little introduction to the Document Object Model, followed by a bit of history. Then we'll take a look at accessing elements through the Level 0 DOM and how to use the Level 0 DOM.

Document Object Model

The Document Object Model has been around since browsers began supporting JavaScript. From Netscape 2 onwards, web programmers wanted to access bits of HTML and change their properties. For instance, when you write a mouseover script you want to go to a certain image on the page and change its src property. When you do this, the browser reacts by changing the image on the screen.

The function of the Document Object Model is to provide this kind of access to HTML elements on your page. Exactly what elements you can access in which way and exactly what you can change depends on the browser. Each higher browser version gives you more freedom to reach any element you like and change anything you like.
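
To make the mouseover idea concrete, a minimal Level 0 DOM sketch looks something like this (the image file names are invented for the illustration):

// Swap the src of the image with name="first" when the mouse enters
// and leaves it, wired up as:
// <a href="#" onmouseover="highlight()" onmouseout="restore()">
//   <img name="first" src="first.gif"></a>
function highlight() {
    document.images['first'].src = 'first_hover.gif';
}
function restore() {
    document.images['first'].src = 'first.gif';
}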

DOM history

There are three DOM levels:

  1. The Level 0 DOM, supported from Netscape 2 onwards by all browsers.
    This DOM is treated below on this page.
  2. The two Intermediate DOMs, supported by Netscape 4 and Explorer 4 and 5.
    Note that the use of these DOMs is not necessary any more; I advise you to ignore them. These DOMs are treated on the archived Intermediate DOMs page.
  3. The Level 1 DOM, or W3C DOM, supported by Mozilla and Explorer 5.
    This DOM is treated in its own section.

Now let's take a look at the origins and development of the Document Object Model.

Level 0 DOM

The Level 0 DOM was invented by Netscape at the same time JavaScript was invented and was first implemented in Netscape 2. It offers access to a few HTML elements, most importantly forms and (later) images.

For reasons of backward compatibility the more advanced browsers, even those who support the Level 1 DOM, still also support the old, faithful Level 0 DOM. Not supporting it would mean that the most common scripts suddenly wouldn't work any more. So even though the Level 0 DOM doesn't entirely fit into the new DOM concepts, browsers will continue to support it.

For the same reason Microsoft was at first forced to copy the Netscape DOM for Explorer 3. They wanted a real competitor for Netscape and having it produce lots of error messages on every page that contained JavaScript would have been strategically unsound.

Therefore the Level 0 DOM is really unified: all browsers that support parts of it support these parts in the same way. With the later DOMs this situation changed.

Intermediate DOMs

When the Version 4 browsers were released, the hype of the day was DHTML, so both browsers had to support it. DHTML needs access to layers, separate parts of a page that can be moved across the page. Not surprisingly in view of their increasing competition, Netscape and Microsoft chose to create their own, proprietary DOMs to provide access to layers and to change their properties (their position on the page, for instance). Netscape created the layer model and the DOM document.layers, while Microsoft used document.all. Thus a proper cross-browser DHTML script needs both intermediate DOMs.

Fortunately, nowadays these intermediate DOMs are not important any more. You can safely forget them.

Level 1 DOM

Meanwhile W3C had developed the Level 1 DOM specification. The Document Object Model W3C proposed was at first written for XML documents, but since HTML is after all a sort of XML, it could serve for browsers too.

Besides, the Level 1 DOM is a real advance. For the first time, a DOM was not only supposed to give an exact model for the entire HTML (or XML) document, it is also possible to change the document on the fly, take out a paragraph and change the layout of a table, for instance.

Since both Netscape and Microsoft had participated in the specification of the new DOM, since both browser vendors wanted to support XML in their version 5 browser and since public pressure groups like the Web Standards Project exhorted them to behave sensibly just this once, both decided to implement the Level 1 DOM.

Of course, this doesn't mean that Mozilla and Explorer 5 are the same. Again for reasons of backward compatibility Microsoft decided to continue support of document.all so that Explorer 5 now supports two DOMs (three if you count the Level 0 DOM).
On the other hand, the core of Mozilla is being built by the open source Mozilla Project and the leaders of this project have decided to completely ditch the old document.layers DOM of Netscape 4 and have Mozilla support only the Level 1 DOM.

See the Level 1 DOM page for more information.

Accessing elements

Each DOM gives access to HTML elements in the document. It requires you, the web programmer, to invoke each HTML element by its correct name. If you have done so, you can influence the element (read out bits of information or change the content or layout of the HTML element). For instance, if you want to influence the image with name="first" you have to invoke it by its proper name

document.images['first']

and you are granted access. The Level 0 DOM supports the following nodeLists:

  • document.images[], which grants access to all images on the page.
  • document.forms[], which grants access to all forms on the page.
  • document.forms[].elements[], which grants access to all form fields in one form, whatever their tag name. This nodeList is unique to the Level 0 DOM; the W3C DOM does not have a similar construct.
  • document.links[], which grants access to all links (<A HREF> tags) on the page.
  • document.anchors[], which grants access to all anchors (<A NAME> tags) on the page.

How to use the Level 0 DOM

When the browser concludes that an HTML document has been completely loaded, it starts making arrays for you. It creates the array document.images[] and puts all images on the page in it, it creates the array document.forms[] and puts all forms on the page in it etc.

This means that you now have access to all forms and images, you just have to go through the array in search of the exact image or form that you want to influence. This can be done in two ways: by name or by number.

Suppose you have this HTML document:

-------------------------------------------
| document |
| -------- ------------------- |
| |img | | second image | |
| -------- | | |
| ------------------- |
| ------------------------------------- |
| | form | |
| | --------------------- | |
| | | address | | |
| | --------------------- | |
| ------------------------------------- |
-------------------------------------------

document.images

The first image has name="thefirst", the second has name="thesecond". Then the first image can be accessed by either of these two calls:

document.images['thefirst']
document.images[0]

The second one can be accessed by either of these calls:

document.images['thesecond']
document.images[1]

The first call is by name: simply fill in the name (between quotes, it's a string!) within the [] and you're ready.

The second call is by number. Each image gets a number in the document.images array, in order of appearance in the source code. So the first image on a page is document.images[0], the second one is document.images[1] etc.

document.forms

The same goes for forms. Suppose the form on the page has name="contactform", then you can reach it by these two calls:

document.forms['contactform']
document.forms[0]

But in the case of forms, usually you don't want to access just the form, but a specific form field. No problem, for each form the browser automatically creates the array document.forms[].elements[] that contains all elements in the form.

The form above holds as its first element an <INPUT TYPE="text" NAME="address">. You can access it by these four calls:

document.forms['contactform'].elements['address']
document.forms['contactform'].elements[0]
document.forms[0].elements['address']
document.forms[0].elements[0]

These four calls are completely interchangeable, it's allowed to first use one, then another. It depends on your script exactly which method of access you use.

Doing what you need to do

Once you have correctly accessed a form field or an image through the Level 0 DOM, you'll have to do what you want to do. Images are usually accessed to create a mouseover effect that changes the property src of an image:

document.images['thefirst'].src = 'another_image.gif';

Usually you want to access forms to check what a user has filled in. For instance, to read out what the user has filled in, check the value property:

x = document.forms[0].elements[0].value;

and then check x for whatever is necessary. See the introduction to forms for details on how to access specific form fields (checkboxes, radio buttons etc.).
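
As a tiny worked example using the contactform and address names from above - a check, attached with <form name="contactform" onsubmit="return checkForm()">, that blocks submission while the address field is empty:

// Read the address field through the Level 0 DOM and refuse to
// submit the form while it is still empty.
function checkForm() {
    var x = document.forms['contactform'].elements['address'].value;
    if (x == '') {
        alert('Please fill in your address.');
        return false; // cancels the submission
    }
    return true;
}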

Ashko

Thursday, November 09, 2006

How do I find the geographical location of a host, given its IP address?

In general, it is impossible - IP addresses are allocated arbitrarily, there's no inherent connection between an IP address and its physical location, and there's no reliable method to do the trick.

Yet, doing some detective work could help. Try the following methods:




  1. Note the following links for reference:



    A complete list of country codes

    http://www.iana.org/domain-names.htm

    http://www.ics.uci.edu/pub/websoft/wwwstat/country-codes.txt




    A complete list of U.S. state abbreviations

    http://www.usps.gov/ncsc/lookups/abbr_state.txt



    A complete list of airport codes

    http://www.aviationjobsonline.com/airports/citycode.html



    Microsoft's TerraServer - satellite pictures of geographical areas

    http://www.terraserver.microsoft.com/



  2. Use reverse DNS to find out the host's name. The name itself could supply some helpful clues.




    E.g. given the IP address 132.74.18.2, the command 'nslookup 132.74.18.2' translates the address to construct.haifa.ac.il, which gives two hints -


    1. The TLD is .il, which hints the host is in Israel.
    2. The next two domains are haifa.ac, so this host belongs to the 'haifa' academic institute (a university, in this case). Haifa University happens to be in the city of Haifa.



    Reverse DNS translation doesn't always work - it depends on the correct configuration of the DNS server for the host with the given IP address.



    Another trick is to execute a whois request on the IP address. Try to direct the whois query to whois.arin.net - if it doesn't have the reply it will tell you to query either whois.apnic.net or whois.ripe.net



    Notice that a host in one domain might be hosted in another country. This is due to virtual hosting, where the domain of a company from one country or region might be hosted in another country where hosting is cheap.



    Also notice that the .org, .com, and even .edu domains do not imply the host is in the U.S., as many of those domains belong to companies that are either not U.S.-based or are international, and might have hosts all over the world.



  3. Some hosts support a DNS extension, described in RFC 1876, which allows hosts to enter their geographical location into their DNS record.




    For further information see http://www.ckdhr.com/dns-loc/ (a sample query appears at the end of this list).



    Another attempt to express a host's geographical location via DNS is made in RFC 1712. Both RFCs define a DNS Resource Record to contain the geographical location.



  4. Visit the host's web server. A web site will sometimes contain hints regarding the site's location.



    E.g. for construct.haifa.ac.il, you can find info at both http://www.haifa.ac.il/ and http://www.ac.il/




  5. Use whois. The whois database contains administrative contact info for all domains, filled in during domain registration time, and updated from time to time. This admin info could give some hints.



    The whois database is not highly reliable - if an address belongs to a large & responsible company, the company will supply reliable info and update it, but as domain name registrars do not insist on keeping the database accurate and current, the data might be incorrect.



    The IP to Lat/Long page will attempt to display the same information in a graphical representation.

    http://cello.cs.uiuc.edu/cgi-bin/slamm/ip2ll/



    The Allwhois.com page allows whois requests for many countries.

    http://www.allwhois.com/




    A list of whois servers, collected by Matt Power, is available at ftp://sipb.mit.edu/pub/whois/whois-servers.list



    Note that the data is usually given for the owner's main branch or contact points, but the IP addresses might be allocated to hosts located elsewhere.



  6. Use traceroute. The names of the routers through which packets flow from your [or any] host to the host with the given IP address might hint at the geographical path which the packets follow, and at the final destination's physical location.




    E.g. > traceroute www.mit.edu
    traceroute to DANDELION-PATCH.MIT.EDU(18.181.0.31), ...
    1 teg.technion.ac.il (132.68.7.254) 2 ms 1 ms 1 ms
    2 tau-smds.man.ac.il (128.139.210.16) 5 ms 5 ms 5 ms
    3 128.139.198.129 (128.139.198.129) 9 ms 11 ms 13 ms
    4 TAU-shiber.route.ibm.net.il (192.115.73.5) 535 ms 549 ms 513 ms
    5 fe7507.tlv.ibm.net.il (192.116.177.1) 562 ms 596 ms 600 ms
    6 165.87.220.18 (165.87.220.18) 1195 ms 1204 ms
    7 nyc28-16-sar1.ny.us.ibm.net (165.87.28.19) 1208 ms 1216 ms 1233 ms
    8 198.133.27.5 (198.133.27.5) 1210 ms 1239 ms 1211 ms
    9 sprint-nap.bbnplanet.net (192.157.69.51) 1069 ms 1087 ms 1122 ms
    10 nyc1-br2.bbnplanet.net (4.0.1.25) 1064 ms 1109 ms 1061 ms
    11 cambridge1-br1.bbnplanet.net (4.0.1.122) 1185 ms 1146 ms 1203 ms
    12 cambridge2-br2.bbnplanet.net (4.0.2.26) 1185 ms 1159 ms 1073 ms
    13 ihtfp.mit.edu (192.233.33.3) 1052 ms 642 ms 658 ms
    14 W20-RTR-FDDI.MIT.EDU (18.168.0.8) 640 ms 665 ms 674 ms
    15 DANDELION-PATCH.MIT.EDU (18.181.0.31) 702 ms 915 ms 868 ms



    The 3rd hop takes the path to the academic network [checked by local whois lookup], the 5th hop takes the path to New York [on the east coast], and the 10th hop takes the path to Cambridge [in Massachusetts, on the coast, north of New York].



    There is a utility named VisualRoute (http://www.visualware.com/visualroute/index.html) which traceroutes a host and displays the route on a map of the world. The host's location on the map is based on the whois query, which may be wrong - an Israeli domain might be displayed as being in Israel though it is hosted in another country.




  7. Some of the services available on the host might give further info.



    E.g. telnet construct.haifa.ac.il 13 <== Time of day service
    Trying 132.74.18.2...
    Connected to construct.haifa.ac.il.
    Escape character is '^]'.
    Wed Jan 21 08:32:53 1998 <== Time difference hints at the
    host's time zone.



  8. Naming conventions of ISPs and backbones



    AT&T dialups: <port>.<router-location>.<state>.dial-access.att.net




    Port is 2-254 for the dial-up ports, and 1 for the router itself. location: example: "los-angeles-2" (city and router #). state: 2-letter abbreviation.




    uu.net dialups:

    A. <port>.<device>.<city>.<state>.<iu>.uu.net


    B. <port>.<device>.<airport>.<iu>.uu.net




    iu = intended use (meaningless), state is per USPS ZIP code, device is Ascend 'TNT' # or Ascend 'MAX' #.
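
    Referring back to item 3: if you have the dig utility installed, you can query a host's LOC record directly. Expect an empty answer from most hosts, since few actually publish the record.

    E.g. > dig -t LOC www.ckdhr.com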


    Ashko

Build An Online Community With Drupal

A Little History

Today, there's a proliferation of tools designed with Web-based communities in mind. Some might argue that the modern community model began with the launch of Slashdot. The code to build a Slash-like community is readily available. Some might point to the Kuro5hin community as the prototype. That code, too, is available for anyone interested in community-building. With varying degrees of administrative and user ease, these tools serve as a useful means to achieve the end of community-building.

Some, myself among them, believe that the real revolution in communities is underway at this very moment. The Web merely delivers the space in which communities form. Increasingly, those communities are themselves becoming movements with greater weight and influence than ever. The American presidential campaign of 2003-2004 may, years from now, prove to be a watershed moment in the evolution of online communities. MeetUp and Daily Kos are motivating Web users far beyond the Web sphere, providing the tools and space within which users motivate themselves and each other toward political activism in the offline world.

Need further evidence of the value of community-building in politics? Look no further than two American presidential campaigns that clearly "got it," using online community-building to form and foster a stronger political voice than ever before. Though the campaigns didn't last, the impact of the communities they built will be felt for years and elections to come.

One strong example was the Clark Community Network (CCN), a creation of Cameron Barrett and the Wesley Clark for President campaign. Based on modifications to Scoop, the Kuro5hin tool, CCN provided true two-way communication between the campaign, Clark supporters, and interested observers. Users were allocated their own blog space, with entries voted down or voted to the front page by other members of the community. CCN moved from creation through rollout to more than 5000 active users in less than six weeks. At its peak, the community helped drive the overall Web traffic of the Clark '04 site to well over 175,000 unique visitors a day. That's a powerful forum for both candidate and supporters.

But the best-known political community of the 2003-2004 campaign was the DeanSpace site. Based on Drupal, both DeanSpace and its sister, Dean for America, were widely recognized as the communities that put politics squarely at the center of the online map. The focus of the Dean communities was more than mere discussion -- it was mobilization in support of Howard Dean. These communities took the American media by storm with their reach and power. That attention finally pushed the concept of blogs and communities to the forefront of both technical and popular media reporting.

So, what is this Drupal tool that captured the fancy of even the narrowly-focused American political media? Simply put, it's an easily-customized open source package of PHP code that you, too, can use to create an online community. Regardless of your community needs -- dog-walking, Chinese checkers, Chicago blues, or politics -- Drupal can set your community foundation in place quickly and easily. Let's set the community-values proselytizing aside for a bit, just long enough to paint the basic concepts and technologies with a hands-on brush.
Installing and Configuring Drupal

Drupal bills itself as "community plumbing." You can download the elbow joints and drain traps that are the source code of Drupal at http://drupal.org. The latest version, 4.1.1, was released on May 1. The requirements for installation are minimal: the Apache Web server, PHP, and a MySQL database. These are the recommended tools, though any database server supported by the PHP PEAR libraries will do.

Prior to unpacking the source code, you'll need to check a few PHP configuration settings. In the /etc/php.ini file, ensure that you have the following:

magic_quotes_gpc = 0
session.save_handler = user

PHP4 also provides support for RSS syndication and the Blogger API by default, via the XML extension. If you want to use clean URLs in your community, you'll also need to enable the mod_rewrite and .htaccess capabilities on your Apache server.
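
You can sanity-check these settings from PHP itself. The script below is purely illustrative - it's not part of Drupal, just a quick probe of the environment:

<?php
// Illustrative check of the settings Drupal expects (not part of Drupal).
if (get_magic_quotes_gpc()) {
    echo "magic_quotes_gpc is on; set it to 0 in php.ini\n";
}
if (ini_get('session.save_handler') != 'user') {
    echo "session.save_handler should be 'user'\n";
}
if (!extension_loaded('xml')) {
    echo "the XML extension (RSS and Blogger API support) is missing\n";
}
?>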

With the PHP and server configurations set, extract the source code and copy it to the DocumentRoot of the Web server:

tar -zxvf ~/source/drupal-4.1.1.tar.gz
cp -r drupal-4.1.1/* /var/www/html

Next, you'll set up the Drupal database using the mysqladmin tool. The example below uses root to create the database; the -p switch will prompt you for the root password. Adjust your mysqladmin command accordingly. Following the example, the new database will be named "drupal".

mysqladmin -u root -p create drupal

Now, log in to the MySQL monitor (you'll be prompted for the root password again):

mysql -u root -p

Create the Drupal user permissions on the database. In the example below, "drupal_user" is the database user for whom you're creating permissions. This user need not exist; setting the permissions will create the proper database entry. "localhost" is the local database server and "drupal_pass" is the password you're assigning to the drupal_user account.

GRANT ALL PRIVILEGES ON drupal.* TO 'drupal_user'@'localhost' IDENTIFIED BY 'drupal_pass';

To finish the database setup, flush the privileges and log out of the MySQL monitor with the following commands:

flush privileges;
\q

You can now create the Drupal database tables using the create script provided in the Drupal package. You'll need to change directories into the server DocumentRoot and execute MySQL, feeding it the database script as input (enter drupal_pass when prompted):

cd /var/www/html
mysql -u drupal_user -p drupal < database/database.mysql

With the Drupal database set up, it's time to dig into the configuration file for some minimal site-specific adjustments.

The Drupal installation directory contains an includes subdirectory, which is where the configuration file resides. You'll need to set the database connection URL and the base URL of your site in this file. In your favorite text editor, set the $db_url line of includes/conf.php to:

$db_url = "mysql://drupal_user:drupal_pass@localhost/drupal";

This line tells Drupal how to connect to the database you created above; note the database name at the end of the URL. Next, set the base URL by editing the following line:

$base_url = "http://your.url.here";

This is the public address of your Drupal installation. With a mere thirteen small steps, your Drupal installation is ready for its first browser connection.

The first time you open Drupal, you'll create an account. This first account will become the administrator account for your system. As always, select the username and password carefully.

The administrative features of Drupal present a nearly dizzying array of options. In order to understand those options, it's important to first understand the underlying structural philosophy.
Understanding the Base Drupal Installation

Drupal presents every slice of content attached to the system as a node. This is, as you might have guessed, analogous to a network, in which every desktop, server, and printer is a network node. In the case of Drupal, these nodes consist of anything related to the content of the site. A base Drupal node carries many pieces, such as the title, author, body, comments, votes, score, users, and authored-on date. Other node types include polls, static pages, personal blog entries, forum topics, and book pages. Collectively, these discrete pieces form the core of the Drupal system.

In parallel to Drupal's node system is its scheme of blocks. Think of blocks as the visible user and administrative tools: a gateway for you (as admin) and your users to access additional tools or view information about the system. These blocks include login, navigation, most recent poll, who's online, who's new, and blogs. If you bring some PHP experience to your Drupal administration, it's an easy task to create blocks specific to the purpose of your community. In fact, the Drupal authors have provided a wealth of documentation to assist you in doing so. Here's a small first taste; we'll look at module examples a bit further on.
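
A custom block's body can be plain PHP whose output becomes the block's content. The sketch below is hypothetical - the greeting text is invented - but the global $user object is a standard Drupal facility:

<?php
// Hypothetical custom block: greet the logged-in user.
// $user is Drupal's global user object; the wording is illustrative.
global $user;
if ($user->uid) {
    print 'Welcome back, ' . htmlspecialchars($user->name) . '.';
}
else {
    print 'Log in to join the conversation.';
}
?>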

Nodes and blocks alone make Drupal a powerful, easily configurable community-building tool. But there's yet another level to the system. The heavy lifting behind the scenes of a Drupal installation is done by modules. Modules, in fact, control both blocks and nodes. They extend the base functionality of the Drupal installation, and they can be enabled or disabled and protected with user-appropriate permissions. Critical modules include:

* admin - Provides all the administrative features
* block - Controls the boxes around the main content
* blog - Creates personal blog space for registered site users
* user - Provides the ability for users to register and log in
* help - Controls a deep and very useful help system
* node - The core module that allows content submission to the site
* system - Allows full administration of the site
* profile - Provides configurable user profiles
* tracker - Tracks recent posts to the system

The combination of nodes, blocks and modules is a powerful one. It provides user and administrative granularity unsurpassed in other community tools. With PHP behind the scenes and a MySQL backend, the configuration options are nearly endless. On your first login to the newly installed Drupal system, it's well worth your while to look carefully through the administrative and configuration options. You'll be startled by how much control lies at your fingertips.
Customizing Drupal For Your Community

Exactly what type of community do you want to build? Are politics the crackers in your soup? Do you lean more toward creative activities: writing, art, music? Each community type will have a different set of feature requirements. Political communities, for example, should always provide forums, hierarchical commenting, and polls. You may also want to send new-post notifications to blog aggregators such as weblogs.com. These are all features that can be enabled as modules in Drupal.

While political communities require high levels of user interaction, a community focused on more creative endeavours might need only light commenting and the ability to collaborate on community works. In a community of this sort, "extra" features can be disabled with a single mouse-click in Drupal. It's important to think through your community's needs clearly, so that you don't build out a site with too few features - or far too many.

For all the pre-planning, the final voice on features in your community will be the users. The flexibility of Drupal and its full default set of features make it easy to fulfill the needs of those users with minimal effort.

Occasionally, though, you or your users will find a need for a feature that can't be met by the default Drupal tools. If you're proficient in PHP, you can design, test, and add modules to your installation. The Drupal authors have provided a wealth of instruction and guidance for creating these modules. Of particular interest are the following guides (a minimal module skeleton follows the list):

* Writing Themeable Modules
* Writing a Node Module
* Using Profile Data and Writing Custom PHP Pages
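
To give a feel for the shape of a module, here is a hypothetical skeleton. The module name ("greeting") and its behavior are invented, and the exact set of hooks and their signatures vary between Drupal versions, so treat this as a sketch rather than a drop-in file:

<?php
// greeting.module - a hypothetical, minimal Drupal module skeleton.
// Drupal discovers functionality through "hooks": functions named
// <modulename>_<hook>. Hook names and signatures vary by version.

function greeting_help($section) {
    // Text for Drupal's built-in help system, keyed by section.
    switch ($section) {
        case 'admin/modules#description':
            return 'Displays a simple greeting to community members.';
    }
}

function greeting_perm() {
    // Permissions this module contributes to access control.
    return array('view greetings');
}
?>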

If you're more interested in providing the features than coding them, you may find that someone else has already done the tough work. Sites providing downloadable Drupal modules include:

* Sourceforge.net
* DeanSpace
* BiteSizeInc
* Cortex Communications

Paying close attention to your user requirements, and making the modifications to meet those needs, will help ensure the growth of your community.
Conclusion

The Web as a place? It's been that since the beginning. Email, chat, and instant messaging all contribute to the sense of place on the Web. They serve as gathering places where a broad collection of voices, given proper time and care, become one. The surge of Web communities, in wikis and blogs and political sites, is really just a return to roots. It's one that's taken even the mainstream media by storm. That surge has provided yet another glimpse of the human potential of the Web.

And it's a surge spurred, in part, by tools; tools like Drupal. With a little care and minimal configuration, you too can install the plumbing for your Web-based community.

Lessons in Javascript Performance Optimisation: 90 seconds down to 3 seconds

I've recently been optimising the guts out of a JS webapp I wrote, which was making IE slow to a crawl. I discovered this after introducing a stress-inducing data set. (Using Rails' fixtures makes light work of this; since the fixtures are Ruby templates just like the web templates, it's easy to introduce a loop to create lots of data.)

With a rather large data set (100+ items, each with several fields), IE would take about 90 seconds to churn through the initial script before the user could do anything. Firefox would run the same thing in about 8 seconds - still too long for a web page, but incredibly about ten times as fast as IE. I want to avoid pagination at this stage, so the first priority was to tweak performance and see if everything could stay on the same page.

After some sophisticated profiling ((new Date()).getTime() :D), the main culprit was revealed to be Prototype's $$ function. It's a fantastic function, but if you try to grab all elements belonging to a certain class and the DOM is really big, $$(".cssClassName") can be slow - *REALLY SLOW* in IE. The remedy:

  • Removed trivial usages of $$() - e.g. in one case, the script was using it as a simple shorthand for a couple of DOM elements, and it was easy enough to hardcode the array. i.e. $$(".instruction") becomes [$("initialInstruction"), $("finalInstruction")]. The former notation is cuter, but unfortunately impractical on a large web page.
  • Introduced the unofficial selector addon. It seems to have improved performance of more complex queries, e.g. $$("#knownId .message"), but doesn't seem to have affected the performance of $$(".classname").
  • Finally, I bit the bullet and scrapped $$(".classname") altogether. It's more work, but the script now maintains the list of elements manually: whenever an element is added or removed, the array must be adjusted to match. Furthermore, even the initialisation avoids $$(), thanks to some server-side generated JS that explicitly declares the initial list of elements belonging to the class (i.e. the list that would normally be returned by $$()). To do this, the following function, generated with RHTML, is called from onload().
function findAllItems() {
<% js_array = @items.map { |item| "document.getElementById('item#{item.id}')," }.join
   js_array = js_array[0..-2] if @items.length > 0  # Drop the trailing comma -%>
  return [<%= js_array %>];
}

The last step explicitly identifies all items in the class, removing the need to discover them by traversing the DOM. I wasn't really sure how much time it would save - after all, you still have to look the elements up in the DOM and assign them to the array. But when I tried it, the savings were supreme - on IE, from around 45 seconds to about 2 seconds.

I have also incorporated Dean Edwards' superb onload replacement to get the ball rolling before images are loaded. It's a neat trick and takes 5 minutes to refactor it in.

Wednesday, November 08, 2006

Cost of Using PHP

I’m not any kind of programming language zealot/fanboy - syntax is pretty much irrelevant; each to their own.

But I’ve been thinking about the bigger picture of choosing a language, from the business perspective. Sure, most languages can do the same thing, give or take. But what other (non-syntax) issues are there that can influence the ‘cost’ of adopting one language over another?

We use a mixture of languages (the best tool for the job), but I'm personally a fan of PHP, and have used it religiously for 6 or 7 years. So here are my initial thoughts on the cost of using PHP, based on that experience. I haven't really elaborated on them, but you get the idea. What have I missed? Do any of these make sense?

  • Recruitment. This one’s probably the most controversial. I think there’s more ‘chaff’ to filter through when recruiting/advertising for PHP developers. You could spin this into a positive, and call it ‘more choice’. But - compared to, say, advertising for a Ruby or J2EE developer, I’ve found that there are more ‘designers’ who think they can develop because they’ve written a random image function in PHP. And filtering through CVs costs money.
  • Recruitment. Conversely, this low barrier to entry (and cost) means that kids start using it when they're young. So, arguably, an application from a 23-year-old PHP developer is often better (at least, more experienced) than an application from a 23-year-old VB developer. Some might elaborate on this (he says, in fear of being flamed) and propose that born developers - the best, 10x-achieving people - are more likely to use PHP (or similar: Ruby, Python, Perl) over, say, ASP, as they will have started programming at an early age, when their interest was first piqued. If you're 12 or 13 years old and want to start programming, it's a lot easier to get up and running (and experimenting/learning) with PHP than with ASP.NET/Visual Studio.
  • Windows. Like it or not, there are a lot of Windows boxes out there. We probably install an equal amount of our software on Windows and Linux servers. But the PHP development effort has (until recently) not taken Windows too seriously, which does increase our cost of using it in Windows environments (the lack of ISAPI support was a real downer for a long time). Consistent cross-platform support is a real plus for many languages.
  • Maintenance/Debugging/IDEs. The majority of large-scale software development effort goes into maintenance. PHP has had (in the not-too-distant past) a lack of good-quality debugging support (OK, there are some debuggers, but they aren't as good as those for many other languages) and error-handling features (e.g. exceptions, until recently).
  • Market Perception. This shouldn't be underestimated. In our market, the majority of our competitors throw around terms like 'J2EE' and '.NET' and all other kinds of frameworks. Big business loves these - they've heard of them, they're trusted, they're low-risk. So a PHP solution is often seen as the 'open source' or lesser solution, unable to contend with these big frameworks. This adds to the 'cost' in lost sales. I think Zend/PHP are now starting to take this very seriously.
  • Standards/Speed of change/Backwards compatibility. I'll lump these together and cite just one example: XML support. It's incredibly important for many web-based systems, so you'd think it would be an important part of PHP? It was quite slow in coming, it changed over many versions, and didn't have backwards compatibility with the earlier functions! That all adds up to extra costs (lost opportunities and additional maintenance). I'd still argue that there are some important XML features missing (I know, I know - I should shut up and develop them myself…)
  • Richness. I'm getting a bit close to the 'syntax' thing here, but one of PHP's killer attributes is the richness of the language - the sheer number and flexibility of its native functions and extensions. Want to do just about anything with an array? Sure, there's probably a single function for it. Dynamically create PDFs, images, or anything else? Yup, a couple of extra lines. This is where PHP has real value - in RAD/Agile environments. (See the sketch after this list.)
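
Since the richness point is easier shown than told, here's a small illustrative fragment - the data and output file name are invented:

<?php
// Invented data: rank players and keep the top two - one call each.
$scores = array('ann' => 82, 'bob' => 95, 'cat' => 71);
arsort($scores);                    // sort by value, highest first, keys kept
$top = array_slice($scores, 0, 2);  // take the two best entries

// Dynamic image generation is similarly terse with the GD extension.
$im = imagecreatetruecolor(120, 40);
$white = imagecolorallocate($im, 255, 255, 255);
imagestring($im, 5, 10, 12, 'Top: ' . key($top), $white);
imagepng($im, 'top.png');           // write the image to disk
imagedestroy($im);
?>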

This sounds more negative than I imagined it would. It's not meant to be - I'm very happy with the progress PHP has made over the last few years (finally, native Unicode support is coming!).

Ashko

Microsoft Starts to "Get" Open Source

Those who believe that Microsoft only pays lip service to Open Source may be surprised by the mega-deal Microsoft just signed with Novell, which shows that Microsoft has finally decided to come to terms with Linux.

The Novell deal is exceptionally complicated, and spans legal, technical and financial issues. But the upshot is this: There should be better interoperability between Linux and Windows as a result of it.

A lot of money will change hands. For example, Microsoft will pay Novell $240 million for SUSE Linux Enterprise Server subscription certificates, for Microsoft customers who want to use Linux in Windows environments. That will cover support and maintenance for Novell's SUSE Linux Enterprise Server.

Separately, Microsoft has also launched an Open Source lab, for testing interoperability between Open Source and Windows.

So the days of a siege mentality at Microsoft when it comes to Linux and Open Source are clearly over. The real winners here are users.

Ashko