- •Contents
- •Practice №1 The study of Google cloud services. Gmail.
- •1.1 About Gmail
- •1.2 Why choose Gmail
- •1.3 Creating an account
- •1.4 Gmail as a Google Account
- •2. Gmail’s Interface
- •2.1 Inbox
- •2.2 Compose Mail
- •2.3 Drafts
- •2.4 Sent Mail
- •2.5 More
- •2.6 Report Spam
- •2.7 Delete
- •2.8 Keyboard Shortcuts
- •3. Organizing your Gmail
- •3.1 Contacts
- •3.2 Stars
- •3.3 Labels
- •4. Advanced Settings
- •4.1 General Settings
- •4.2 Accounts and Import
- •4.3 Filters
- •4.4 Forwarding and POP/IMAP
- •4.5 Offline
- •5. The Fun Stuff
- •5.1 Buzz
- •5.2 Chat
- •5.3 Web Clips
- •5.4 Labs
- •5.5 Themes
- •5.6 Gmail Mobile
- •5.7 Google Docs
- •5.8 Google Calendar
- •5.9 Tasks
- •6. Conclusion
- •Practice №2 The study of Google cloud services. Google Talk.
- •2.1 Use the native Gmail Talk option
- •2.2 Installing the voice/video chat plugin
- •Practice №3 The study of Google cloud services. Google Calendar
- •3.1 Interface
- •3.2 Create an event
- •3.3 Add location
- •3.4 Invite people
- •3.5 Share meeting materials
- •3.6 Meet online
- •3.7 New committee? New (shared) calendar.
- •Practice №4 Editing electronic documents with Google Apps
- •4.1 Creating new files
- •4.2 Using templates
- •Practice №5 The study of Google App Engine functions
- •5.1 Google App Engine Docs
- •5.2 Download the App Engine SDK for PHP
- •5.3 Creating the Configuration File
- •Practice №6 Creating a data warehouse environment in Google App Engine
- •6.1 Setting up Objectify
- •6.2 Creating the data model classes
- •6.3 Adding the greetings and the form to the JSP template
- •6.4 Creating the form handling servlet
- •6.5 Testing the app
- •6.6 Creating required indexes
- •Practice №7 The study of Google Apps cloud services
- •7.1 Gmail
- •7.2 Google Drive
- •7.3 Google Docs, Sheets, Slides, and Forms
- •7.4 Google Sites
- •7.5 Google Calendar
- •7.6 Google Hangouts
- •7.8 Google Apps Vault
- •7.9 Usage
- •Practice №8 Microsoft Office Live Workspace
- •8.1 Setting up Microsoft Live Workspace
- •8.2 Features Available with Office Live Workspace
- •Practice №9 The study of Microsoft SkyDrive cloud services
- •9.1 Creating a Microsoft account
- •9.2 Getting to know OneDrive
- •9.3 Installing the Microsoft OneDrive app
- •9.4 OneDrive for mobile devices
- •Practice №10 Network services for the mobile user. Wi-Fi technology
- •10.1 What is Wi-Fi?
- •Practice №11 Search engines on the Internet
- •Veronica & Jughead:
- •Improve Your Searching Skills:
- •Infoseek:
- •Inktomi:
- •Vertical Search
- •Verticals Galore!
- •Information Retrieval as a Game of Mind Control
- •Increasing The Rate of Algorithmic Change
- •Practice №12 Searching for graphic information on the Internet. Comparative analysis of search engines. Internet image search
- •Study Guide
Improve Your Searching Skills:
Want to become a better searcher? Most large scale search engines offer:
Advanced search pages, which help searchers refine their queries to request files that are newer or older, local in nature, from specific domains, published in specific formats, or refined in other ways; for example, in Google the ~ character means "related to" (see the sample queries after this list).
Vertical search databases, which may help structure the information index or limit the search index to a more trusted or better structured collection of sources, documents, and information.
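As a hedged illustration, here are a few sample refinement queries (Google's syntax of the period; the domains are placeholders and exact operator support has changed over time):

    site:example.com annual report      restricts results to one domain
    filetype:pdf cloud computing        returns only PDF documents
    "cloud services"                    matches the exact phrase
    ~cheap hotels                       also matches terms related to "cheap"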
Nancy Blachman's Google Guide offers searchers free Google search tips, and Greg R. Notess's Search Engine Showdown offers a search engine features chart.
There are also many popular smaller vertical search services. For example, Del.icio.us allows you to search URLs that users have bookmarked, and Technorati allows you to search blogs.
World Wide Web Wanderer:
Soon the web's first robot arrived. In June 1993 Matthew Gray introduced the World Wide Web Wanderer. He initially wanted to measure the growth of the web and created this bot to count active web servers. He soon upgraded the bot to capture actual URLs. His database became known as the Wandex.
The Wanderer was as much of a problem as it was a solution because it caused system lag by accessing the same page hundreds of times a day. It did not take long for Gray to fix the software, but people had started to question the value of bots.
ALIWEB:
In October of 1993 Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in response to the Wanderer. ALIWEB crawled meta information and allowed users to submit the pages they wanted indexed along with their own page descriptions. This meant it needed no bot to collect data and did not use excessive bandwidth. The downside of ALIWEB was that many people did not know how to submit their sites.
Robots Exclusion Standard:
Martijn Koster also hosts the web robots page, which created standards for how search engines should index or not index content. This allows webmasters to block bots from their site at the whole-site level or on a page-by-page basis.
By default, if information is on a public web server and people link to it, search engines generally will index it.
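A minimal sketch of how a well-behaved crawler honors this standard, using Python's standard library; the site URL and bot name are invented for illustration:

    from urllib.robotparser import RobotFileParser

    # The site's robots.txt might read:
    #   User-agent: *
    #   Disallow: /private/
    parser = RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()  # fetch and parse the live file

    # Check permission before fetching each page.
    print(parser.can_fetch("ExampleBot", "https://example.com/private/report.html"))

The same policy can also be expressed per page with a robots meta tag, which is how the page-by-page blocking mentioned above is usually done.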
In 2005 Google led a crusade against blog comment spam, creating a nofollow attribute that can be applied at the individual link level. After this was adopted, Google quickly expanded the stated purpose of nofollow, claiming it covered any link that was sold or not under editorial control.
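The attribute is applied in the link markup itself; a hypothetical example:

    <a href="https://example.com/" rel="nofollow">a link that passes no endorsement</a>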
Primitive Web Search:
By December of 1993, three full-fledged bot-fed search engines had surfaced on the web: JumpStation, the World Wide Web Worm, and the Repository-Based Software Engineering (RBSE) spider. JumpStation gathered info about the title and header from web pages and retrieved these using a simple linear search. As the web grew, JumpStation slowed to a stop. The WWW Worm indexed titles and URLs. The problem with JumpStation and the World Wide Web Worm was that they listed results in the order they found them, providing no ranking. The RBSE spider did implement a ranking system.
Since early search algorithms did not do adequate link analysis or cache full page content, if you did not know the exact name of what you were looking for it was extremely hard to find.
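To see how primitive this was, here is a small illustrative Python sketch of title-only retrieval in crawl order; the records and query are invented:

    # Early engines stored little more than a title and a URL per page.
    index = [
        ("JumpStation home page", "http://example.edu/jumpstation"),
        ("Physics department", "http://example.edu/physics"),
    ]

    def linear_search(query):
        # Scan every record in crawl order: no ranking, no body text,
        # no link analysis -- a match requires the words to be in the title.
        return [(title, url) for title, url in index
                if query.lower() in title.lower()]

    print(linear_search("physics"))  # found only because the title contains the word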
Excite:
Excite came from the project Architext, which was started in February 1993 by six Stanford undergrad students. They had the idea of using statistical analysis of word relationships to make searching more efficient. They were soon funded, and in mid-1993 they released copies of their search software for use on web sites.
Excite was bought by a broadband provider named @Home in January 1999 for $6.5 billion and renamed Excite@Home. In October 2001 Excite@Home filed for bankruptcy. InfoSpace bought Excite out of bankruptcy court for $10 million.
Web Directories:
VLib:
When Tim Berners-Lee set up the web he created the Virtual Library, which became a loose confederation of topical experts maintaining relevant topical link lists.
EINet Galaxy
The EINet Galaxy web directory was born in January of 1994. It was organized similarly to how web directories are today. The biggest reason EINet Galaxy became a success was that it also contained Gopher and Telnet search features in addition to its web search feature. The web's size in early 1994 did not really require a web directory; however, other directories soon followed.
Yahoo! Directory
In April 1994 David Filo and Jerry Yang created the Yahoo! Directory as a collection of their favorite web pages. As their number of links grew, they had to reorganize and become a searchable directory. What set the directories apart from The Wanderer was that they provided a human-compiled description with each URL. As time passed and the Yahoo! Directory grew, Yahoo! began charging commercial sites for inclusion, and the rates for listing a commercial site increased over time, reaching $299 per year. Many informational sites were still added to the Yahoo! Directory for free.
On September 26, 2014, Yahoo! announced they would close the Yahoo! Directory at the end of 2014, though it was transitioned to being part of Yahoo! Small Business and remained online at business.yahoo.com.
Open Directory Project
In 1998 Rich Skrenta and a small group of friends created the Open Directory Project, a directory that anybody can download and use in whole or in part. The ODP (also known as DMOZ) became the largest internet directory, almost entirely run by a group of volunteer editors. The Open Directory Project grew out of the frustration webmasters faced waiting to be included in the Yahoo! Directory. Netscape bought the Open Directory Project in November 1998. Later that same month AOL announced its intention to buy Netscape in a $4.5 billion all-stock deal.
LII
Google offers a librarian newsletter to help librarians and other web editors make information more accessible and categorize the web. The second Google librarian newsletter came from Karen G. Schneider, director of the Librarians' Internet Index. LII is a high quality directory aimed at librarians. Her article explains what she and her staff look for when selecting quality, credible resources to add to the LII. Most other directories, especially those with a paid inclusion option, hold lower standards than the selective, limited catalogs created by librarians.
The Internet Public Library is another well kept directory of websites.
Business.com
Due to the time-intensive nature of running a directory, and the general lack of scalability in the business model, the quality and size of directories drop off sharply past the first half dozen or so general directories. There are also numerous smaller industry, vertical, or locally oriented directories. Business.com, for example, is a directory of business websites.
Looksmart
Looksmart was founded in 1995. It competed with the Yahoo! Directory, the two frequently raising their inclusion rates back and forth. In 2002 Looksmart transitioned into a pay-per-click provider, which charged listed sites a flat fee per click. That destroyed any good faith or loyalty they had built up, although it allowed them to profit by syndicating those paid listings to major portals like MSN. The problem was that Looksmart became too dependent on MSN, and in 2003, when Microsoft announced it was dropping Looksmart, that move basically killed their business model.
In March 2002 Looksmart bought a search engine named WiseNut, but it never gained traction. Looksmart also owned a catalog of content articles organized into vertical sites, but due to limited relevancy Looksmart lost most (if not all) of its momentum. In 1998 Looksmart tried to expand its directory by buying the non-commercial Zeal directory for $20 million, but on March 28, 2006 Looksmart shut Zeal down, hoping to drive traffic using Furl, a social bookmarking program.
Search Engines vs Directories:
All major search engines have some limited editorial review process, but the bulk of relevancy at major search engines is driven by automated search algorithms which harness the power of the link graph on the web. In fact, some algorithms, such as TrustRank, bias the web graph toward trusted seed sites without requiring a search engine to take on much of an editorial review staff. Thus, some of the more elegant search engines allow those who link to other sites to, in essence, vote with their links as the editorial reviewers.
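A minimal sketch of the idea behind TrustRank-style link analysis: PageRank whose teleport step is biased toward a hand-picked set of trusted seed pages, so trust flows outward along links. The graph, seed set, and constants below are invented for illustration and do not reproduce the published algorithm:

    def trust_scores(links, seeds, damping=0.85, iters=50):
        # links maps each page to the list of pages it links to.
        pages = set(links) | {t for out in links.values() for t in out}
        # Teleport probability is concentrated on the trusted seeds.
        base = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
        score = dict(base)
        for _ in range(iters):
            new = {p: (1 - damping) * base[p] for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    # Each page passes a share of its score along its links.
                    new[target] += damping * score[page] / len(outlinks)
            score = new
        return score

    # Pages reachable from the seed inherit trust; unlinked spam gets none.
    web = {"seed": ["a", "b"], "a": ["b"], "b": [], "spam": ["junk"], "junk": []}
    print(trust_scores(web, seeds={"seed"}))

Real implementations also handle dangling pages and normalize scores, which this sketch omits for brevity.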
Unlike highly automated search engines, directories are manually compiled taxonomies of websites. Directories are far more cost and time intensive to maintain due to their lack of scalability and the necessary human input to create each listing and periodically check the quality of the listed websites.
General directories are largely giving way to expert vertical directories, temporal news sites (like blogs), and social bookmarking sites (like del.icio.us). In addition, each of those three publishing formats also aids in improving the relevancy of major search engines, which further cuts into the need for (and profitability of) general directories.
WebCrawler:
Brian Pinkerton of the University of Washington released WebCrawler on April 20, 1994. It was the first crawler which indexed entire pages. Soon it became so popular that during daytime hours it could not be used. AOL eventually purchased WebCrawler and ran it on their network. Then in 1997, Excite bought out WebCrawler, and AOL began using Excite to power its NetFind. WebCrawler opened the door for many other services to follow suit. Within a year of its debut came Lycos, Infoseek, and OpenText.
Lycos:
Lycos was the next major search development, having been designed at Carnegie Mellon University around July 1994. Michael Mauldin was responsible for this search engine and served as the chief scientist at Lycos Inc.
On July 20, 1994, Lycos went public with a catalog of 54,000 documents. In addition to providing ranked relevance retrieval, Lycos provided prefix matching and word proximity bonuses. But Lycos' main difference was the sheer size of its catalog: by August 1994, Lycos had identified 394,000 documents; by January 1995, the catalog had reached 1.5 million documents; and by November 1996, Lycos had indexed over 60 million documents -- more than any other web search engine. In October 1994, Lycos ranked first on Netscape's list of search engines by finding the most hits on the word 'surf'.
