By: Lisa Hoover

Linux on Wall Street routinely draws 800 to 1,000 attendees each year, and organizer Russell Flagg says that is a testament to just how much value the Wall Street community sees in open source alternatives to expensive and often limiting proprietary business solutions.

“We don’t just focus on Linux exclusively, though,” says Flagg. “We’ve expanded over the years to include open source on the whole. Wall Street has already made huge investments in other operating systems and legacy systems and [this conference] helps find alternative solutions that are less expensive, more robust, and more agile. There is a continued search on the part of Wall Street to find other systems as capable as what they already have.”

Flagg says the idea for the event took shape after he and fellow organizer Pete Harris spoke to representatives from IBM, Red Hat, Intel and other companies that understand how open source products can impact the business world. “We found an untapped niche in the financial market” and the conference was born.

Presenters and speakers were carefully selected to help attendees sort through mountains of information and take the useful parts back to their companies. “We have a very free-form conference,” says Flagg. “We let speakers decide for themselves what they think is appropriate information rather than go through PR channels” to shape the presentations and offer actionable information.

“Seventy percent of our conference-goers are the ‘suits’ — decision makers and business people. The other thirty percent are developers focused on technology. So, primarily, our speakers are the Wall Street guys who are being collegiate and sharing information with their colleagues on Wall Street. After all, Wall Street listens to Wall Street.”

Conference planners haven’t forgotten about who makes open source deployments work behind the scenes, though, and invite IT managers to attend the event as well. “IT managers are given mandates to lower costs and reduce budgets so open source is a great option. It’s a tough world out there for them — they must do more with less and still maintain capabilities in a 24/7 world. It’s a balancing act.”

Panels and speakers

Raven Zachary, Open Source Research Director for The 451 Group, will share his views on Wall Street’s adoption of open source during a panel titled “Selling Open Source To The CIO.” Zachary and fellow panel members from Oracle, Intel, and Jefferies & Company will participate in a moderated discussion of how the CIOs in the financial market perceive Linux and how to overcome their objections, something Zachary says is important for vendors in the space to remain competitive.

“The Financial Services firms are leading the enterprise adoption of open source technology, including Linux,” says Zachary. “By watching the consumption patterns of these firms, open source vendors can gain a good understanding of the types of products and services that are commercially viable.

“We’re seeing an increased effort by Financial Services firms to recruit open source project developers. Bringing expertise in-house is a growing trend. The challenge is that the demand for open source talent is growing at a greater rate than the expansion of this talent pool. Developers contributing to popular open source projects are in high demand and are in a great position to obtain employment.”

Zachary says that despite the fact that Wall Street has already invested millions in proprietary hardware and software, they can easily see the value in exploring other alternatives. “Open source adoption with Financial Services is not just about cost. Standards are a big part of this value. Standards provide greater longevity, ease of future migration, and talent acquisition, to name just a few.”

Using free and open source software is not without risks, however. Phil Robb, Engineering Section Manager in the Open Source and Linux Organization (OSLO) at Hewlett-Packard, will give a presentation titled “Open Source Governance: Recognizing and Dealing with the Unique Risks Associated with Free Software.” Robb will discuss best practices to help companies protect themselves from the legal and technical pitfalls associated with using open source products.

Doug Small, Director of Marketing for OSLO, says that despite the potential drawbacks of adopting open source, the financial market has demonstrated they are more than ready to incorporate it into their business models.

“Financial services companies have been early adopters of Linux. First we saw Wall Street firms begin using Linux, then banks and mutual funds. [Now] we see insurance companies beginning to use Linux more broadly and we are seeing Linux used in progressively more critical deployments.

“We see a growing role for open source software beyond Linux in financial services companies and that’s why many companies are expanding the governance policies and procedures around using open source software.”

Event organizer Russell Flagg says a lot of ground will be covered during the day-long conference but he believes that, as in past years, attendees will come away with a better understanding of how Linux and open source fits into the Wall Street world. “We don’t often hear about the real-world implementations that happen as a result of this conference,” he says, “but the success of the conference each year I think is an indication that [open source] applications are definitely being considered.”

Source: Linux Community

Saved from webworkerdaily.com

Last week brought us news regarding Google’s future plans for their online application suite. At the Web 2.0 Expo, CEO Eric Schmidt said Google will release a PowerPoint-type presentation application, slated for this summer. Then, VP Douglas Merrill announced on the official Google blog that the company has acquired videoconferencing software from a Swedish startup.

What else is in the works? Phil Sim of Squash makes some guesses after his participation in a survey of Google Apps Premier users. In that survey, Google explored his interest in a variety of applications. Beyond the basics already included in the suite, Google asked about project and contact management, file storage, and online discussion groups, suggesting they are thinking of incorporating these into their suite.

Combining this information, we can make some guesses at what you might find in Google Apps in the future.

1. Presentation. Through their acquisition of Tonic Systems, Google will offer an alternative to Microsoft’s PowerPoint, as well as to the many web-based presentation systems under development. That category includes SlideShare, Zoho Show, Thumbstacks, and Spresent.

2. Project management. Watch out, 37Signals: the survey Phil completed suggests that project management is on Google’s to-do list, something that would likely compete directly with 37Signals’ popular Basecamp service.

3. Contact management. Gmail’s automatic creation of contacts from emails works really nicely. If you use Google Apps for your Domain, you can already share contacts across users. It’d be great to also see some Highrise-like capabilities — taking notes, tracking interactions, and managing tasks related to people you’re working with.

4. File storage and sharing. We regularly cover online file storage and sharing apps here at Web Worker Daily because it’s a core step in managing your online work. Google Blogoscoped ponders how it might look and work.

5. Online discussion groups. Google Groups already exists but it’s not tied into Google Apps. I’d like to see a unification under the Google Reader interface where you could browse your mail, RSS feeds, and relevant discussion groups all in one quick keyboard-accessible screen.

6. Wiki. Google acquired JotSpot on Halloween of 2006 and immediately closed it to new sign-ups. News has been sparse, but in January the JotSpot developers announced an upgrade for existing customers and said it will be the last version produced before migration to Google’s infrastructure. Perhaps Google will combine project management with the JotSpot wiki capabilities — wikis provide a reasonable alternative to dedicated project management apps for some teams.

7. Video chat. Google announced its acquisition of Swedish start-up Marratech’s video conferencing software, saying that it intends to use it internally only. No one would be surprised if Google incorporated it into the Google Talk client to support video chat, though.

8. Web meetings. Marratech offers capabilities beyond videoconferencing to include e-meetings and collaborative whiteboards along the lines of what WebEx is known for. Here’s hoping that if they do offer web-based real-time meetings, they work better than WebEx.

What else would you like to see in Google’s online office suite? Check out this Google wish list discussion to get some ideas. I’m voting for online image editing — which seems like a fairly likely addition, given Google’s Picasa offerings.

April 21st, 2019

New StumbleUpon Feature

The most recent StumbleUpon Toolbar (v. 3.05) includes a new feature called StumbleThru, which allows users to stay on a specific web site while stumbling through pages that they might enjoy. Wikipedia, Flickr, MySpace, YouTube, WordPress, The Onion, and CNN are some of the sites currently enabled (as are the .edu and .gov domain names).

It’s a cool way to find those YouTube videos or Onion articles that will appeal most to you. But I agree with Rafe Needleman - StumbleUpon should release this functionality through an API and let sites include a “Stumble” button. If the reader is a StumbleUpon user, the button would take them to a page on the site that they’ll like. If they aren’t, it could take them to a random page on that site and prompt them to become a StumbleUpon user to get more customized results.

Creating a link to take readers to a random post is a good idea and would only take a couple of minutes to code in WordPress (we’ll do it for fun this afternoon). If StumbleUpon gives away the functionality, my guess is a lot of sites would integrate it to increase page views.
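For the curious, the random-post redirect might look something like the following rough sketch: a standalone file dropped into the WordPress root (the file placement and bootstrap include are assumptions; $wpdb, get_permalink(), and wp_redirect() are standard WordPress APIs).

    <?php
    // Rough sketch of a "random post" redirect for WordPress.
    require('./wp-blog-header.php');  // load the WordPress environment

    global $wpdb;
    $id = $wpdb->get_var(
        "SELECT ID FROM $wpdb->posts
         WHERE post_status = 'publish'
         ORDER BY RAND() LIMIT 1"
    );

    if ($id) {
        wp_redirect(get_permalink($id));  // send the reader to a random post
        exit;
    }
    ?>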

Saved from www.itmanagersjournal.com

Author: Bruce Byfield

The GNU General Public License (GPL) is one of the most widely used software licenses — and, undoubtedly, the most misunderstood. Some of this misunderstanding comes from hostile propaganda, but some also comes from a lack of experience in licensing issues on the part of both lawyers and lay users, and the use of standard language in conventional end-user license agreements that are unthinkingly coupled with the GPL. Whatever its origin, the confusion is frequently based on misreadings, rumors, secondhand accounts, and what is convenient to believe.

To get a sense of the most common misunderstandings, NewsForge consulted with three experts: Richard Fontana, a lawyer with the Software Freedom Law Center and one of the main drafters of the third version of the license; David Turner, former compliance engineer at the Free Software Foundation who is assisting with the revisions of the license; and Harald Welte of the GPL-Violations project, which tracks possible cases of non-compliance and tries to assist in resolving them. Taken together, the opinions of these experts offer a summary of the most common misunderstandings about the GPL, from comic exaggerations to potentially legitimate differences of opinion.


1. The GPL is viral

The idea that any software that comes into contact with GPL-licensed software also becomes subject to the GPL seems to have originated with Craig Mundie, a senior vice president of Microsoft, in a speech delivered at the New York University Stern School of Business in May 2001. Since then, David Turner reports, many people have come to believe that even having GPL software on the same computer brings other software under the license. In extreme cases, Turner says, this belief has led to bans on all GPL software at some companies.

This misunderstanding stems from section 2 of the current GPL, which states only that modified versions of GPL software must also be licensed under the GPL. However, the section clearly states that if sections of a work “can be reasonably considered independent and separate works in themselves,” then the GPL does not apply to them when they are distributed separately, and that being on the same “storage or distribution medium does not bring the other work under the scope of this License.” As Fontana points out, the definition of a derivative work could be clearer — and should be in the third version of the license — but the general principle is unmistakable.


2. The GPL is unenforceable

At the opposite extreme from the idea that the GPL is viral is the idea that it is unenforceable — or, in Turner’s words, “It’s just a bunch of hippies. How are they going to force us to do anything?” Turner attributes this misconception at least partly to the Free Software Foundation’s preference for helping violators come into compliance rather than resorting automatically to lawyers and the courts. Yet that preference cuts both ways: the fact that violators consistently choose compliance over a legal battle strongly suggests that they believe the license would be enforced. More importantly, in the few cases where the GPL has gone to court, such as Welte v. Sitecom in Germany or Drew Technologies, Inc. v. Society of Automotive Engineers, Inc. in the United States, the license has been indirectly or directly upheld.


3. You can’t charge for GPL software

Some of the first words in the GPL are, “When we speak of free software, we are referring to freedom, not price.” Yet despite repeated reminders from the Free Software Foundation, including one on its home page, even some members of the free software communities believe that charging money for GPL software is illegal. Dozens of companies, including Red Hat and Novell, continue to charge for free software and daily prove otherwise.

The only mentions of price in the GPL come in section 1, which states that, “You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee,” and section 3b, which states that source code must be provided “for a charge no more than your cost of physically performing source distribution.”


4. The “liberty or death” clause applies absolutely

Section 7 of the GPL is sometimes tagged as the “liberty or death” clause because it states that conditions imposed by court orders or allegations of patent infringement do not release users of the license from following its conditions. Instead, if they cannot meet both the imposed conditions and the GPL’s conditions, they must stop distributing.

According to Fontana, many users interpret section 7 far too rigorously. Although the section applies only to patent licenses that prohibit users from passing on full GPL rights, Fontana says, “Some read the section as prohibiting distribution of GPLed code under the benefit of any non-sublicensable patent license.” In addition, some have worried that the mere existence of a possibly-applicable patent, or of some law or regulation that might potentially be applied to everyone in a particular jurisdiction, is enough to trigger this section. Neither reading is supported by the actual text of the license.


5. Distributors only need to ship the source code they alter

Section 5 of the GPL states that “by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions.” These conditions include the obligation to provide the source code of the works distributed. However, many maintainers of software derived from other works conveniently believe that, so long as the distributors of the original work are distributing source code, they only need to provide the source code for the works that they modify. As mentioned in a recent NewsForge article, this assumption seems especially widespread among maintainers of derivative GNU/Linux distributions. Unfortunately, while the need for all distributors to provide source code sometimes seems redundant and often onerous, the GPL makes no provision for exceptions. Nor is it likely to in the future, according to Turner.


6. Distributors only need to supply source code, and not the means to use it

Under section 3 of the GPL, providing the source code is only part of a distributor’s obligation. The section defines the complete source code as not only “the source code for all modules” and “any associated interface definition files,” but also “the scripts used to control compilation and installation of the executable” — in other words, the tools needed to make the source code useful to anyone. Within the free software community, many people will already have those tools, but distributors cannot assume that all recipients will.


7. Distributors don’t need to provide offers of source code

The GPL in section 3 permits users to either distribute source code with binary files, or to include an offer to provide the source code. To do neither and wait for requests may be less work, but is a straightforward violation.


8. Distributors only need to offer source code to their customers

If distributors opt to provide an offer for source code, then under section 3b, the offer must be good for three years, and must apply to “any third party.” No distinction is made between commercial customers and anyone else who might be interested in the source code.


9. Distributors only need to link to the license text

Providing only a link to the GPL is easy for the distributor, but a clear violation of section 1, which grants the right to distribute GPL software “provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice.” Welte explains that this provision is necessary because not all users will always have Internet access to read the license. If they cannot read the license, they cannot understand the terms under which they are allowed to distribute the software.


10. I don’t think that word means what you think it means

Richard Fontana points out that a handful of confusions about the GPL are not misunderstandings, but potentially valid differences based on differences of opinion or interpretation under law. “Perhaps the most fundamental difference,” he says, “has to do with what a ‘work’ of software is, in the copyright law sense. The GPL assumes that the underlying legal system will provide a reasonable answer to this question. The work includes what a programmer would objectively regard as being part of the same program.” However, others with different philosophies or approaches to the issue might define different works in other terms, such as the files that they use.

Similarly, while the current version of the GPL refers to “distribution” of a work, Fontana notes that the word can have different legal meanings. For example, he says, “The meaning may vary in the United States depending on whether one is talking about distribution in the copyright law sense or distribution in the sense of ordinary commercial usage.” Moreover, in other countries, “distribution” or its equivalent may not occur in copyright law, or be used in a different sense.

One of the main goals of the latest draft of the new version of the GPL is to reduce these ambiguities by starting the license with definitions and changing terminology. “Distribution,” for instance, has been replaced with “propagate” and “convey.” But, until the third version is finalized in early 2007, the problems of definition will remain.


Future misunderstandings

Many misunderstandings about the GPL may be eliminated or reduced by the next version of the license, which, so far, has included many attempts to clarify its intentions. In fact, Turner believes that the extensive consultation that is part of the revision process may educate users in itself. “Here,” he says, “is their chance to discuss the license publicly. They can read the discussion and see how the Free Software Foundation came to its decisions. It gives people an information pool.”

At the same time, the upcoming changes may create their own set of misunderstandings. After all, one of the reasons for the revision is to take into account new considerations, such as BitTorrent distributions, that did not exist when the current text was written. In addition, while changes in terminology may make the license easier to apply in different jurisdictions, those familiar with the old terms may be confused. Turner wonders whether the old terms will “stick around, if only subliminally, and will confuse people.”

In the end, Turner concedes, some degree of confusion is probably inescapable. “There’s always going to be people who misunderstand,” he says, “no matter how you write the license, even in words of one syllable.”


Bruce Byfield is a course designer and instructor, and a computer journalist who writes regularly for NewsForge, Linux.com and IT Manager’s Journal.

April 18th, 2019

CRAZY GOOGLE - Part 2

[Image gallery: parody Google logos, including Guess Whoogle, l33t-speak variants, Lego, Loogie, Moogle, Ogle, Olympics, Pac-Man, Pooglemon, Spam, Stroodle, Smeagol Google, Stan Laurel, and stutter Google, among others.]

CRAZY GOOGLE - Part 1

[Image gallery: more parody Google logos, including Google 1960, Google 2084, Gmail Paper, Bart’s blackboard, ASCII art, dyslexia Google, French Search, Alligator, Breast Cancer, Consonant Day, Doogle, Examine Eyes, Gargle, Gargoyle, Giggle, Gigli, and Googol, among others.]

Read Also

  • Google’s Crazy News

Google is very close to launching a filtering service that would prevent copyrighted content from being uploaded to video-sharing site YouTube, CEO Eric Schmidt said Monday.

Visit: Schmidt says YouTube ‘very close’ to filtering system

When it comes to freelancing, one of the biggest challenges can be finding work. Even the most successful freelancers will experience a lean month here and there, so it pays to have as many sources of potential work as possible. That’s why we’ve compiled a monster list of job sites from around the net. There is sure to be a site in here listing a job tailor-made for you!

Visit: The Monster List of Freelancing Job Sites

Original Post: Beware of XHTML

If you’re a web developer, you’ve probably heard about XHTML, the markup language developed in 1999 to implement HTML as an XML format. Most people who use and promote XHTML do so because they think it’s the newest and hottest thing, and they may have heard of some (usually false) benefits here and there. But there is a lot more to it than you may realize, and if you’re using it on your website, even if it validates, you are probably using it incorrectly.

I should make it clear that I hope XHTML has a bright future on the Web. That is precisely why I have written this article. The state of XHTML on the Web today is more broken than the state of HTML, and most people don’t realize it because the major browsers aren’t even treating those pages like real XHTML. If you hope for XHTML to succeed on the Web, you should read this article carefully.

Some of the issues discussed in this article are complicated and technical. If you find it difficult to follow, I suggest at least taking a look at the myths of XHTML, examples of latent compatibility issues, and the list of standards-related XHTML sites that break when treated properly.

Some quotes from prominent people/vendors:

Microsoft (Internet Explorer):
“If we tried to support real XHTML in IE 7 we would have ended up using our existing HTML parser (which is focused on compatibility) and hacking in XML constructs. It is highly unlikely we could support XHTML well in this way”
Mozilla (Firefox):
“If you are using the usual HTML features […] serving valid HTML 4.01 as text/html ensures the widest browser and search engine support.”
Apple (Safari):
“On today’s web, the best thing to do is to make your document HTML4 all the way. Full XHTML processing is not an option, so the best choice is to stick consistently with HTML4.”
Håkon Wium Lie (from Opera, W3C):
“I don’t think XHTML is a realistic option for the masses. HTML5 is it.”
Anne van Kesteren (from Opera):
“I’m an advocate of using XHTML only in the correct way, which basically means you have to use HTML. Period.”
Ian Hickson (from Opera, Google, W3C):
“Authors intending their work for public consumption should stick to HTML 4.01”

 

Table of Contents

  1. What is XHTML?
  2. Myths of XHTML
  3. Benefits of XML
  4. Content type is everything
  5. HTML compatibility guidelines
  6. Internet Explorer incompatibility
  7. Content negotiation
  8. Null End Tags (NET)
  9. Firefox and other problems
  10. Conclusion
  11. List of standards-related sites that break as XHTML
  12. List of standards-related sites that stick with HTML
  13. Related sites
  14. See also

 

What is XHTML?


XHTML is a markup language hoped to eventually (in the distant future) replace HTML on the Web. For the most part, an XHTML 1.0 document differs from an HTML 4.01 document only in the lexical and syntactic rules: HTML is written in its own unique subset of SGML, while XHTML is written in a different subset of SGML called XML. SGML subsets are differentiated by the sets of characters that delimit tags and other constructs, whether or not certain types of shorthand markup may be used (such as minimized attributes, omitted start/end tags, etc.), whether or not tag names or character entities are case sensitive, and so on.

The Document Type Definition (DTD, which is referenced by the doctype) then defines which elements, attributes, and character entities exist in the language and where the elements may be in the document. The DTDs of XHTML 1.0 and HTML 4.01 are nearly identical, meaning that, as far as things like elements and attributes go, XHTML 1.0 and HTML 4.01 are basically the same language. The only added benefit of XHTML is that it uses XML’s subset of SGML and shares the benefits XML has over HTML’s subset.
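To make the difference concrete, here is a minimal pair of equivalent documents (illustrative only). The elements are identical; only the doctype, the xmlns attribute on the html element, and the lexical rules, such as the self-closing br tag, differ.

HTML 4.01 Strict:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head><title>Example</title></head>
    <body><p>Hello<br>world</p></body>
    </html>

XHTML 1.0 Strict:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head><title>Example</title></head>
    <body><p>Hello<br />world</p></body>
    </html>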

 

Myths of XHTML


There are many false benefits of XHTML promoted on the Web. Let’s clear up some of them at a glance (with details and other pitfalls provided later):

  • XHTML does not promote separation of content and presentation any more than HTML does. XHTML has all of the same elements and attributes (including presentational ones) that HTML has, and it doesn’t offer any additional CSS features. Semantic markup and separation of content and presentation is absolutely possible in HTML and is equally easy.
  • Most XHTML pages on the Web are not parsed as XML by today’s web browsers. The vast majority of XHTML pages on the Web cannot be parsed as XML. Even many valid XHTML pages cannot be parsed as XML. See the Validity and Well-Formedness article for details and examples.
  • HTML is not deprecated and is not being phased out at this time. In fact, the World Wide Web Consortium recently renewed the HTML working group which is working to develop HTML 5.
  • XHTML does not have good browser support. Most browsers simply treat XHTML pages as regular HTML (which presents a number of problems). Some major browsers like Firefox, Opera, and Safari may attempt to handle the page as proper XHTML, but usually only if you include a certain special HTTP header. However, when you do so, Internet Explorer and a number of other user agents will choke on it and won’t display a page at all. Even when handled as XHTML, the supporting browsers have a number of additional bugs.
  • Browsers do not parse valid XHTML dramatically faster than valid HTML, even when they’re parsing XHTML correctly. Although the browser can skip certain shorthand-handling logic, it now has to do extra work to confirm that the document is well-formed. Although XHTML, when parsed with an XML parser, may be somewhat faster to parse than typical HTML, the difference usually isn’t very significant. And either way, download speed is usually the bottleneck when it comes to document parsing, so users won’t notice any speed improvement.
  • XHTML is not extensible if you hope to support Internet Explorer or the number of other user agents that can’t parse XHTML as XML. They will handle the document as HTML and you will have no extensibility benefit.
  • XHTML source does not necessarily look much different from HTML source. If you prefer making sure all of your non-empty elements have close tags, you may use close tags in HTML, too. The only real markup differences between an HTML document and an XHTML document following the legacy compatibility guidelines are the doctype, the xmlns attribute on the html element, and the /> tag ends (which are themselves XML shorthand constructs, much like the ones so many people claim to dislike in HTML).

 

Benefits of XML


XML has a number of improvements over HTML’s subset of SGML:

  • Although HTML’s subset allowed for a lot of shorthand markup and other flexibility, it proved too difficult to write a correct and fully-featured parser for it. As a result, most user agents, including all of today’s major web browsers, make many technically unsound assumptions about the lexical format of HTML documents and don’t support a number of shorthand features like Null End Tags (<tag/Content/), unclosed start/end tags (<tag<tag>), and empty tags (<>). XML was designed to eliminate these extra features and restrict documents to a tight set of rules that are more straightforward for user agents to implement. In effect, XML defines the assumptions that user agents are allowed to make, while still resulting in a file that a theoretical fully-featured SGML user agent could parse once pointed to XML’s SGML declaration. It should be noted that an XML parser is, for the most part, not dramatically easier to write than the level of HTML support offered by most HTML parsers. Most of the features that would make HTML more difficult to write a parser for, such as custom SGML declarations, additional marked sections, and most of the shorthand constructs, have negligible use on the Web anyway and generally have poor or absent support in major web browsers. The most significant difference is XML’s lack of support for omitted start and end tags, which in theory could amount to complicated logic in HTML for elements not defined as empty. Even still, most browsers have those rules hard-coded rather than derived from the DTD, so this isn’t a major difference in difficulty either.
  • To minimize the occurrence of nasty surprises when parsing the document, XML user agents are told not to be flexible with error handling: if a user agent comes upon a problem in the XML document, it will simply give up trying to read it. Instead of the webpage, the user will be presented with a simple parse error message. This eliminates the compatibility issues caused by incorrectly-written markup and browser-specific error handling by requiring documents to be “well-formed”, while giving webpage authors immediate indication of the problem. It does, however, mean that a single minor issue like an unescaped ampersand (&) in a URL will cause the entire page to fail, and so most of today’s public web applications can’t safely be incorporated in a true XHTML page. While user agents are supposed to fail on any page that isn’t well-formed (in other words, one that doesn’t follow the generic XML grammar rules), they do not have to fail on a page that is well-formed but invalid. For example, although it is invalid to have a span element as an immediate child of the body element, most XML-supporting web browsers won’t give any indication of the error because the page is still well-formed — that is, the DTD is violated, but not the fundamental rules of XML itself. Some user agents may choose to be “validating” agents and will also fail on validity errors, but they aren’t common. Despite popular assumption, even if an XML page is perfectly valid, it still might not be well-formed (see the short example after this list).
  • Unlike HTML’s subset, which was specifically made for HTML, XML is a common subset used in many different languages. This means that a single simple parser can easily be written to support a number of different languages. It also paved the way for the Namespaces in XML standard which allows multiple documents in different XML formats to be combined in a single XML document, so that you can have, for example, an XHTML page that contains one or more SVG images that use MathML inside them.
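For example, this fragment is well-formed but invalid: every tag is closed and properly nested, so a non-validating XML parser accepts it, even though the XHTML DTD forbids span as a direct child of body:

    <body><span>Some text</span></body>

By contrast, this fragment is not well-formed, because the em and p tags are improperly nested, so an XML parser must refuse the entire document:

    <body><p>Some <em>text</p></em></body>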

 

Content type is everything


When your website sends a document to the visitor’s browser, it adds on a special content type header that lets the browser know what kind of document it’s dealing with. For example, a PNG image has the content type image/png and a CSS file has the content type text/css. HTML documents have the content type text/html. Web servers typically send this content type whenever the file extension is .html, and server-side scripting languages like PHP also typically send documents as text/html by default.

XHTML does not have the same content type as HTML. The proper content type for XHTML is application/xhtml+xml. Currently, many web servers don’t have this content type reserved for any file extension, so you would need to modify the server configuration files or use a server-side scripting language to send the header manually. Simply specifying the content type in a meta element will not work over HTTP.
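For example, in PHP the header can be sent manually, as long as it is sent before any output (a minimal sketch; for static files on Apache, a configuration directive such as AddType application/xhtml+xml .xhtml accomplishes the same thing):

    <?php
    // Replace PHP's default text/html content type with the proper
    // XHTML one. header() must be called before anything is output.
    header('Content-Type: application/xhtml+xml; charset=utf-8');
    ?>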

When a web browser sees the text/html content type, regardless of what the doctype says, it automatically assumes that it’s dealing with plain old HTML. Therefore, rather than using the XML parsing engine, it treats the document like tag soup, expecting HTML content. Because HTML 4.01 and simple XHTML 1.0 are often very similar, the browser can still understand the page fairly well. Most major browsers consider things like the self-closing portion of a tag (as in <br />) as a simple HTML error and strip it out, usually ending up with the HTML equivalent of what the author intended.

However, when the document is treated like HTML, you get none of the benefits XHTML offers. The browser won’t understand other XML formats like MathML and SVG that are included in the document, and it won’t do the automatic validation that XML parsers do. In order for the document to be treated properly, the server would need to send the application/xhtml+xml content type.

The problems go deeper. Comment markers are sometimes handled differently depending on the content type, and when you enclose the contents of a script or style element with basic SGML-style comments, it will cause your script and style information to be completely ignored when the document is treated like XML. Also, any special markup characters used in the inline contents of a style or script element will be parsed as markup instead of being treated as character data like in HTML. To solve these problems, you must use an elaborate escape sequence described in the article Escaping Style and Script Data, and even then there are situations in which it won’t work.
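For reference, the usual form of that escape sequence wraps the content in a CDATA section whose markers are themselves hidden behind language-level comments, so that both parsing modes see the same data (this is the common pattern; the referenced article covers the corner cases where even this fails, and doSomething() is a placeholder):

    <script type="text/javascript">
    // <![CDATA[
    if (x < y) { doSomething(); }  // a bare "<" here would be markup in XML
    // ]]>
    </script>

    <style type="text/css">
    /* <![CDATA[ */
    p { color: green; }
    /* ]]> */
    </style>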

Furthermore, the CSS and DOM specifications have special provisions for HTML that don’t apply to XHTML when it’s treated as XML, so your page may look and behave in unexpected ways. The most common problem is a white gap around your page if you have a background on the body, no background on the html element, and any kind of spacing between the elements, such as a margin, padding, or a body height under 100% (browsers typically have some combination of these by default). In scripting, tag names are returned differently and document.write() doesn’t work in XHTML treated as XML. Table structure in the DOM is different between the two parsing modes. These are only a select few of the many differences.
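As one concrete illustration, the white-gap symptom can be sidestepped by putting the background on the html element, which behaves the same under both parsing modes (a sketch):

    /* As HTML, a background on body alone propagates to the whole canvas;
       as XML it does not, so spacing around body shows through as a gap.
       Styling html explicitly gives consistent results either way. */
    html { background-color: #334; }
    body { background-color: #fff; margin: 1em; }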

The following are some examples of differing behavior between XHTML treated as HTML and XHTML treated as XML. The anticipated results are based on the way Internet Explorer, Firefox, and Opera treat XHTML served as HTML. Some other browsers are known to behave differently. Also note that Internet Explorer doesn’t recognize the application/xhtml+xml content type (see below for an explanation), so it will not be able to view the examples in the second column.

[Examples 1–10 were each linked in two versions: one served as text/html (treated as HTML) and one served as application/xhtml+xml (treated as XML), so the differing behavior could be compared side by side.]

 

HTML compatibility guidelines


When the XHTML 1.0 specification was first written, there were provisions that allowed an XHTML document to be sent as text/html as long as certain compatibility guidelines were followed. The idea was to ease migration to the new format without breaking old user agents. However, these provisions are now viewed by many as a mistake. The whole point of XHTML is to be an XML alternative to HTML, yet due to the allowance of XHTML documents to be sent as text/html, most so-called XHTML documents on the Web now would break if they were treated like XML (see the real-world examples below). Aware of the problem, the W3C had these provisions removed in the first revision of the XHTML specification. In XHTML 1.1 and onward, the W3C now clearly says that an XHTML document should not be sent as text/html. XHTML should be sent as application/xhtml+xml or one of the more elaborate XHTML content types.

 

Internet Explorer incompatibility


Internet Explorer does not support XHTML. Like other web browsers, when a document is sent as text/html, it treats the document as if it was a poorly constructed HTML document. However, when the document is sent as application/xhtml+xml, Internet Explorer won’t recognize it as a webpage; instead, it will simply present the user with a download dialog. This issue still exists in Internet Explorer 7.

Although all other major web browsers, including Firefox, Opera, Safari, and Konqueror, support XHTML, the lack of support in Internet Explorer, as well as in major search engines and web applications, makes its use strongly discouraged.

 

Content negotiation


Content negotiation is the idea of sending different content depending on what the user agent supports. Many sites attempt to send XHTML as application/xhtml+xml to those who support it, and either XHTML as text/html or real HTML to those who don’t.

There are two methods generally used to determine what the user agent supports, both based on the Accept HTTP header. Most often, sites use the incorrect method: they simply look for the string “application/xhtml+xml” anywhere in the header value. Some sites use the correct method: they actually parse the header value, supporting wildcards and ordering by q value.
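To make the second method concrete, here is a simplified sketch in PHP (illustrative only: real Accept headers have more corner cases, and the specificity ranking between exact matches and wildcards is glossed over here):

    <?php
    // Return the q value the Accept header assigns to $type, treating
    // major/* and */* wildcards as matches. q defaults to 1 when omitted.
    function q_value($accept, $type) {
        $parts = explode('/', $type);
        $major = $parts[0];
        $best = 0.0;
        foreach (explode(',', $accept) as $entry) {
            $params  = array_map('trim', explode(';', $entry));
            $pattern = strtolower(array_shift($params));
            if ($pattern !== $type && $pattern !== ($major . '/*') && $pattern !== '*/*') {
                continue;  // this entry doesn't cover the type in question
            }
            $q = 1.0;
            foreach ($params as $p) {
                if (strpos($p, 'q=') === 0) {
                    $q = (float) substr($p, 2);
                }
            }
            $best = max($best, $q);
        }
        return $best;
    }

    $accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '';
    // Prefer text/html on a tie, since agents such as Internet Explorer
    // claim everything via */* but cannot handle XHTML as XML at all.
    if (q_value($accept, 'application/xhtml+xml') > q_value($accept, 'text/html')) {
        header('Content-Type: application/xhtml+xml; charset=utf-8');
    } else {
        header('Content-Type: text/html; charset=utf-8');
    }
    ?>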

Unfortunately, neither of these methods works reliably.

The first method doesn’t work because not all XHTML-supporting user agents actually have the text “application/xhtml+xml” in the Accept header. Safari and Konqueror are two such browsers. The application/xhtml+xml content type is implied by a wildcard value instead. Meanwhile, not all HTML-supporting user agents have “text/html” in the header. Internet Explorer, for example, doesn’t mention this content type. Like Safari and Konqueror, it implies this support by using a wildcard. Even among those user agents that support XHTML and mention application/xhtml+xml in the header, it may have a lower q value than text/html (or a matching wildcard), which implies that the user agent actually prefers text/html (in other words, its XHTML support may be experimental or broken).

The second method (the correct, 100% standards-compliant one) doesn’t work because most major browsers have inaccurate Accept headers:

  • Firefox 2 and below have application/xhtml+xml listed with a higher q value than text/html, even though Mozilla has posted an official recommendation on its site saying that websites should use text/html for these versions if they can, for reasons described below.
  • Internet Explorer doesn’t list either text/html or application/xhtml+xml in its Accept header. Instead, both content types are covered by a single wildcard value (which implies that every content type in existence is supported equally well, which is obviously untrue). So Internet Explorer is saying that it supports both text/html and application/xhtml+xml equally, even though it actually doesn’t support application/xhtml+xml at all. In the case that a user agent claims to support both equally, the site is supposed to use its own preference. A possible workaround is for the site to “prefer” sending text/html or, in a toss-up situation, only send application/xhtml+xml if it’s actually mentioned explicitly in the header. However…
  • Safari and Konqueror, which support XHTML, also give text/html and application/xhtml+xml the same q value (in fact, like Internet Explorer, they also claim to support everything in existence equally well). But they don’t mention application/xhtml+xml explicitly — it’s implied by a wildcard. So if you use the above workaround, Safari and Konqueror will receive text/html even though they really do support application/xhtml+xml.

As disappointing as it may be, content negotiation simply isn’t a reliable approach to this problem.

 

Null End Tags (NET)


In XHTML, all elements are required to be closed, either by an end tag or by adding a slash to the start tag to make it self-closing. Since giving empty elements like img or br an end tag would confuse browsers treating the page like HTML, self-closing tags tend to be promoted. However, XML self-closing tags directly conflict with a little-known and poorly supported HTML/SGML feature: Null End Tags.

A Null End Tag is a special shorthand form of a tag that allows you to save a few characters in the document. Instead of writing <title>My page</title>, you could simply write <title/My page/ to accomplish the same thing. Due to the rules of Null End Tags, a single slash in an empty element’s start tag would close the tag right then and there, meaning <br/ is a complete and valid tag in HTML. As a result, if you have <br/> or <br />, a browser supporting Null End Tags would see that as a br element immediately followed by a simple > character. Therefore, an XHTML page treated as HTML could be littered with unwanted > characters.

This problem is often overlooked because most popular browsers today are lacking support for Null End Tags, as well as some other SGML shorthand features. However, there are still some smaller user agents that properly support Null End Tags. One of the more well-known user agents that support it is the W3C validator. If you send it a page that uses XHTML self-closing tags, but force it to parse the page as HTML/SGML like most user agents do for text/html pages, you can see the results in the outline: immediately after each of the self-closing elements, there is an unwanted > character that will be displayed on the page itself.

(It should be noted that the W3C Validator is unusual in that it generally determines the parsing mode from the doctype, rather than from the content type as most other user agents do. Therefore, an HTML doctype was used in the above example just so the validator would attempt to parse the page using the HTML subset of SGML as all major browsers will for text/html pages regardless of the doctype. The Null End Tag rules are actually set in the SGML subset definition, not the DTD, so this example is accurate to what you should expect in a fully compliant SGML user agent even with an XHTML doctype.)

Technically, a restricted and altered form of Null End Tags exists in XML and is frequently used: the self-closing portion of the start tag. While Null End Tags are defined as / … / in HTML’s subset of SGML, they are specially defined as / … > in XML with the added restriction that it must close immediately after it is opened, meaning the element must have no content. This was designed to look similar to a regular start tag for web developers who are unfamiliar with typical Null End Tags. However, in the process it creates inherent incompatibility with HTML’s subset of SGML for all empty elements.

In summary, although this issue doesn’t show in most popular web browsers, a user agent that more fully supports SGML would see unwanted > characters all over XHTML pages that are sent with the text/html content type. If the goal of using XHTML is to help promote standards, then it’s quite counterproductive to cause unnecessary problems for user agents that more correctly comply to the SGML standard.

 

Firefox and other problems


Although Firefox supports the parsing of XHTML documents as XML when sent with the application/xhtml+xml content type, its performance in versions 2.0 and below is actually worse than with HTML. When parsing a page as HTML, Firefox will begin displaying the page while the content is being downloaded. This is called incremental rendering. However, when it’s parsing XML content, Firefox 2.0 and below will wait until the entire page is downloaded and checked for well-formedness before any of the content is displayed. This means that, although in theory XML is supposed to be faster to parse than HTML, in reality these versions of Firefox usually display HTML content to the user much faster than XHTML/XML content. Thankfully, this issue is expected to be resolved in Firefox 3.0.

However, there are also issues in other browsers, such as certain HTML-specific provisions in the CSS and DOM standards being mistakenly applied to XHTML content parsed as XML. For example, if there is a background set on the body element and none on the html element, Opera will apply the background to the html element as it would in HTML. So even when dealing exclusively with XHTML parsed as XML, you still run into a number of the same problems that you do when trying to serve XHTML either way.

All in all, true XHTML support in major user agents is still very weak. Because a key user agent — namely, Internet Explorer — has made no visible effort to support XHTML, other major user agents have continued to see it as a relatively low priority and so these bugs have lingered. HTML is recommended over XHTML by both Mozilla and Safari and is generally better supported than XHTML by all major browsers.

 

Conclusion


XHTML is a very good thing, and I certainly hope to see it gain widespread acceptance in the future. However, it simply isn’t widely supported in its proper form. XHTML is an XML format, and forcing a web browser to treat it like HTML goes against the whole purpose of XHTML and inevitably causes other complications. Assuming you don’t want to dramatically limit access to your information, XHTML can only be used incorrectly: it will be interpreted as invalid markup by most user agents, cause unwanted results in others, and offer no added benefit over HTML. HTML 4.01 Strict is still what most user agents and search engines are most accustomed to, and there’s absolutely nothing wrong with using it if you don’t need the added benefits of XML. HTML 4.01 is still a W3C Recommendation, and the W3C has even announced plans to further develop HTML alongside XHTML in the future.

 

List of standards-related sites that break as XHTML


The following are just a few of the countless sites that use an XHTML doctype but, as of this writing, completely fail to load or otherwise work improperly when parsed as XML, thus missing the whole point of XHTML. The authors of most of these sites are quite prominent in the web standards community — many are involved in the Web Standards Project (WaSP) — yet they have still fallen victim to the pitfalls of current use of XHTML. In fact, I have found that nearly all XHTML websites owned by WaSP members fail when parsed as XML.

You could consider this a “shame list” of sorts. These are the same people who are supposed to be teaching others how to use web standards properly, yet they have written markup that basically depends on browsers treating it incorrectly. But the main point of this list isn’t to pick on individuals; it’s to reinforce the fact that even so-called experts at web standards have trouble juggling the different ways XHTML will inevitably be handled on the Web. And what benefit does it bring? None of the following sites make use of anything XHTML offers over HTML.

You can test a page’s actual XHTML rendering in Firefox using the Force Content-type extension and setting the new content-type to application/xhtml+xml.

Accessify - WaSP Steering Committee, Accessibility Task Force
Displayed as generic XML, not interpreted as XHTML. The XML namespace was omitted.
all in the <head> - WaSP Steering Committee
Page doesn’t load. Not well-formed. (Note: this page is valid according to the XHTML DTD and XML’s subset of SGML, but XML has additional rules to define well-formed pages which this page breaks, observed in the Textpattern and the Technorati Link Count Widget post. A similar test case is available.)
And all that Malarkey - WaSP Accessibility Task Force
Page doesn’t load. Not well-formed.
CSS Zen Garden - WaSP
Top background doesn’t display. The page relies on HTML-specific background behavior. Numerous designs have errors with a similar cause.
dean.edwards.name/weblog/ - WaSP DOM Scripting Task Force, Microsoft Task Force
In browsers that support behavior binding (including Firefox), which the site uses for dynamic syntax highlighting, most of the code boxes fail to load their contents, resulting in many empty boxes where code snippets should be.
dog or higher
Page doesn’t load. Not well-formed.
Elly Thompson’s Weblog
Page doesn’t load. Not well-formed.
g9g.org - WaSP Steering Committee
There is a thick white gap around the page. The page relies on HTML-specific background behavior.
holly marie - WaSP Steering Committee
Page doesn’t load. Not well-formed.
Jeffrey Veen - WaSP emeritus
Page doesn’t load. Not well-formed.
KuraFire - WaSP
Page doesn’t load. Not well-formed.
Meriblog
Background appears white instead of purple. The page relies on HTML-specific background behavior.
mezzoblue - WaSP
Displayed as generic XML, not interpreted as XHTML. The XML namespace was omitted. Also, individual post pages don’t load. Not well-formed.
microformats
Page doesn’t load. Not well-formed.
molly.com - WaSP Group Lead
Flickr script fails to initialize because the script contents are commented out.
Off the Top - WaSP Steering Committee
Page doesn’t load. Not well-formed.
unadorned.org - WaSP Steering Committee
Stylesheet doesn’t load because the import rule is commented out.
WordPress - WaSP
Page doesn’t load. Not well-formed.

 

List of standards-related sites that stick with HTML


The following are some significant sites relevant to web standards that continue to use HTML rather than XHTML.

  • 456 Berea Street
  • Anne van Kesteren
  • Bite Size Standards
  • David Baron’s Homepage
  • Hixie’s Natural Log
  • Jonathan Snook’s Blog
  • meyerweb.com
  • Mozilla
  • Web Devout
  • WebKit

This work is copyright © 2007 David Hammond and is licensed under a Creative Commons Attribution Share-Alike License. It may be copied, modified, and distributed freely as long as it attributes the original author and maintains the original license. See the license for details.

Tutorial: Exploring Programming Language Architecture in Perl: Visit

Article: Perl 6 and Parrot: Things I Probably Shouldn’t Say But No One Else Seems To: Visit

Article: Using Java Classes in Perl: Visit

Perl Tutorial: The Beauty of Perl 6 Parameter Passing: Visit

Perl Tutorial: Using Bloom Filters: Visit

Building a repl in modern OO perl: Visit

Perl Lessons: Learn Perl in 10 easy lessons: Visit

Perl Lessons: Teach Yourself Perl 5 in 21 days: Visit

How to: Apache/Perl/MySQL/PHP for windows: Visit

Convert your perl scripts to PHP: Visit

Article: Secure Web site access with Perl: Visit

Ping Your Blog Automatically, Using This Perl Script: Visit

Tutorials: Complete tutorials for PHP, Perl, HTML, ASP, VBScript, CSS, JavaScript: Visit

Building a Vector Space Search Engine in Perl: Visit

