Recently in Taxonomy Category

Semantic, Structured Authoring

 

Semantic, Structured Authoring is an important concept in writing content for the web.

Semantic authoring has been defined as "to compose information content semantically structured according to some ontology". (If you've never encountered the word ontology before, the dictionary defines it as "the branch of philosophy concerned with the nature of being".) A much better explanation of semantic authoring is "knowledge markup". Simple tags such as <policy> aren't the only way in which knowledge is categorised, indexed and labelled within XML. Tags can contain attributes (such as the id attribute in <section id="upg11">), and metadata can be stored in tags separate from the content itself (such as <author><firstname>Tony</firstname><surname>Self</surname></author>).
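To make that concrete, here is a small illustrative fragment (the element names and values are invented for this example, not taken from any particular schema) that combines all three ideas: a semantic tag, an attribute, and author metadata held separately from the body content:

<!-- Illustrative only: <policy>, the id value and the metadata wrapper
     are invented for this example, not part of any particular schema. -->
<policy id="upg11" audience="administrator">
  <metadata>
    <author>
      <firstname>Tony</firstname>
      <surname>Self</surname>
    </author>
  </metadata>
  <title>Upgrade policy</title>
  <body>
    <p>Upgrades are applied outside business hours.</p>
  </body>
</policy>

A tool processing this content can find every policy, filter by audience, or list documents by author without ever parsing the prose itself.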

The most common semantic markup languages for documentation are DocBook and the Darwin Information Typing Architecture (DITA). DITA specifies a number of topic types, such as Task, Concept and Reference.

Within DITA, a Task topic is intended for procedures that describe how to accomplish a task: it lists the series of steps that users follow to produce a specified outcome, and it identifies who does what, when, where and how. A Reference topic is for material that describes command syntax, programming instructions or other reference information; it is usually detailed, factual material.
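To make the Task type concrete, here is a minimal sketch of a DITA task topic. The id and wording are invented for illustration, but the elements (task, taskbody, prereq, steps, step, cmd and result) are standard DITA:

<!-- A minimal DITA Task topic; the id and content are invented for illustration. -->
<task id="upgrade_server">
  <title>Upgrade the server software</title>
  <taskbody>
    <prereq>Back up the existing configuration before you begin.</prereq>
    <steps>
      <step><cmd>Stop the server service.</cmd></step>
      <step><cmd>Run the installer and follow the prompts.</cmd></step>
      <step><cmd>Restart the server and verify the version number.</cmd></step>
    </steps>
    <result>The server is running the new version.</result>
  </taskbody>
</task>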

In Coherence Group's business, writing structured content is important because we combine knowledge, learning and software development into performance support tools, so that knowledge workers can avoid the integrative effort of putting this content together themselves.

Recently, I have been working with Earley & Associates on implementing metadata and taxonomy strategies in SharePoint 2007 with our clients, which gives me good insight into the way Microsoft has chosen to manage taxonomy in the SharePoint platform. Seth Earley, the CEO of the company, publishes a blog on metadata and taxonomy called "Not Otherwise Categorized", where he and our colleagues comment on issues that arise in our work. On December 24, my colleague Jeff Carr wrote a blog post entitled "SharePoint 2007 - Implementing and Managing Taxonomy" describing the out-of-the-box functionality that Microsoft delivers with the SharePoint Server platform.

Jeff describes how to use MOSS 2007 functionality to implement taxonomy through a combination of site content types, column definitions and custom lists. He provides several good examples of how this is done in the application.
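As a rough sketch of what sits underneath that approach, site columns and content types can also be provisioned declaratively in a feature's element manifest. The field name, choice values and identifiers below are hypothetical placeholders, not examples taken from Jeff's post:

<!-- Hypothetical example: a choice site column and a content type that
     references it, as they might appear in a MOSS 2007 feature manifest.
     The GUID, content type ID, names and choice values are placeholders. -->
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Field ID="{A1000000-0000-0000-0000-000000000001}"
         Name="DocumentCategory"
         DisplayName="Document Category"
         Type="Choice"
         Group="Taxonomy Columns">
    <CHOICES>
      <CHOICE>Policy</CHOICE>
      <CHOICE>Procedure</CHOICE>
      <CHOICE>Reference</CHOICE>
    </CHOICES>
  </Field>
  <ContentType ID="0x010100A14F5C2B3D4E4F60810203040506A1B2"
               Name="Taxonomy Document"
               Group="Taxonomy Content Types">
    <FieldRefs>
      <FieldRef ID="{A1000000-0000-0000-0000-000000000001}" Name="DocumentCategory" />
    </FieldRefs>
  </ContentType>
</Elements>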

MOSS 2007, however, is not meant to be a taxonomy management tool. SharePoint is designed as a platform that accepts add-ons to extend its functionality, and a number of companies and community projects (many hosted on CodePlex) have created extensions. As Jeff says, "Managing your enterprise taxonomy should take place outside of your enterprise systems in a central location using either MS Excel, if fairly simple, or through a more complex solution such as the WordMap Taxonomy Management System."

Most companies have a rich store of metadata about their employees, sales leads, customers and suppliers, yet they fail to use this data when they build portals and knowledge management systems.

Here is an example: a lot of data is collected about target customers during the sales process. Sales people know what industries their customers participate in, who the key executives are, the size of the business and what products or services they deliver. In addition, sales people frequently involve subject matter experts from the company to help configure the product or service offerings for the customer. By mining the rich store of data in the CRM system, a knowledge manager can identify the customer, the solution that the customer bought and the names of the internal subject matter experts that have the tacit knowledge about the solution.

If knowledge managers identify the stores of information in the enterprise, then they don't have to collect all that information in an exercise called knowledge harvesting. Part of the job of a clever knowledge manager and system developer is to find the existing metadata and import it into the knowledge management system so that it does not have to be re-entered or, worse, recreated. Metadata can be inherited from many different applications; for example, data about people and their expertise frequently resides in HR systems. From electronic resumes we can find where people went to school, the languages they speak and their previous experience. All of this can become metadata for applications that help to find people with specific experience.
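As a loose sketch of what such inherited metadata might look like once it is pulled together (the element names and values here are entirely hypothetical), an expertise record assembled from CRM and HR sources could be as simple as:

<!-- Hypothetical record: element names and values are invented to show
     metadata inherited from CRM and HR systems rather than re-entered. -->
<expertiseRecord>
  <person source="HR">
    <name>Jane Doe</name>
    <education>MSc Information Science</education>
    <language>English</language>
    <language>Spanish</language>
  </person>
  <engagement source="CRM">
    <customer industry="Healthcare">Example Hospital Group</customer>
    <solution>Clinical document management</solution>
    <role>Subject matter expert</role>
  </engagement>
</expertiseRecord>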

Frequently, executives, employees and knowledge managers worry about the volume of incremental work a knowledge management system creates, but analysing existing data sources can actually reduce the amount of work in business processes and accelerate critical information flows.
