Reading #1 -- 'No Place To Hide' - http://www.noplacetohide.net/chapter.html
Wow. It kind of makes you want to stop using your credit cards and patronizing stores. Why are there so many different companies manufacturing these RFID chips, who is gathering all that information, and for what purpose?
Nefarious? Good? Why do they need to know where we shop, what we eat and read, and what brand of dog food we purchase? How will this information help national security, or our country?
If I knew what it was they wanted, and why, it might make a difference in how some people deal with this new technology.
"A couple of organizations, including a federation of research universities, are working on a standard that would enable every manufactured item in the world to be given a unique ID, at least theoretically... Researchers discount as shrill the criticism and focus instead on the enormous potential for improving logistics and customer convenience... The tags, embedded in shoes or luggage or the seams of trousers - officials are contemplating embedding them in airline tickets - might be just the thing for aviation or building security. Or for the intelligence officials who believe that some form of Total Information Awareness will make us safer. Once again, marketers would be leading the way."
So, how do we avoid being "tagged" or "ID'd"? Should we worry that it will not make us safer? Or should we not worry about it, being the good, solid, law-abiding citizens that we are, and go about our business?
Reading #2 -- TIA - http://www.epic.org/privacy/profiling/tia/
I consider myself to be a good citizen. I pay my taxes and mortgage, try to stay out of debt, and don't carry weapons around. I do need to slow down while driving, but I still don't think the police need to worry about me; they have other things to do.
Why are there so many lawmakers who are so opposed to these kinds of technologies? How many times lately have we seen Congressmen and other elected representatives in the news for illegal activities? How does this happen? They should be upholding and sustaining the very laws they are breaking.
Who monitors the agencies gathering the information?
Just a thought.
Reading #3 -- No longer there.
Muddiest Point: Just wondering what will happen when bad 'hackers' break into our government's computer systems, and what they will do with all the information that's been collected. Fraud is a major problem now. If information like this is stored, how will my family be protected?
Comments: I commented on Lori's blog: https://www.blogger.com/comment.g?blogID=6958200230416907745&postID=6203203639829771187
Alison's blog: https://www.blogger.com/comment.g?blogID=8349965223663731455&postID=1914246662763828702&page=0
Corrine's blog: https://www.blogger.com/comment.g?blogID=5477147704203276697&postID=566039529536103427
Friday, November 14, 2008
Week 11 Readings
Reading 1) D-Lib Magazine July/August 2005
Digital Libraries
So, digital libraries. What can one say about them?
They are everywhere. "Federal programmatic support for digital library research was formulated in a series of community-based planning workshops sponsored by the National Science Foundation (NSF) in 1993-1994." I had no idea they were even around until a few years ago.
Luckily, there were several grants given and many larger universities were working on varied digitization projects. "Some of the work led to significant technology transfer and spinoffs (e.g., Google grew out of research performed under the Stanford DLI-1 project). An international collaboration by Cornell and the UK ePrint project, under DLI-2, contributed to the development and adaptation of the Open Archives Initiative for Metadata Harvesting (OAI-PMH) specifications and protocols."
We've been talking about these browsers and technological breakthroughs lately, and the last few years have seen huge advances in this field. One example is the Elsevier publications Dr. He enjoys; they remember his favorite kinds of articles and send them to him when published. How cool is that! It's just like having your library send you your favorite author's books as soon as they come out, without having to fill out a reserve slip.
Another is the W3C standards (XML, XSLT) we've been looking at lately.
There are also web search services such as Google Scholar, Google Print, and Yahoo.
Reading 2) D-Lib Magazine July/August 2005
Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative
The Digital Libraries Initiative (DLI) began in 1994. The idea was to get librarians and computer scientists together to work on digital libraries. With the onset of the internet, things got a little dicey. Publishers wanted to make money from the internet as well, so they struck deals with universities to make their works available to them, for a price, of course.
Luckily for librarians, where there is information wanted, needed, or stored, there must be people to obtain, share, distribute, and maintain said information.
"The accomplishments of the Digital Libraries Initiative and many related activities external to its work have broadened opportunities for library science, rather than marginalizing the field." With cooperation many ideas and books can be shared, in person, or digitally.
Reading 3) ARL: A Bimonthly Report, no. 226 (February 2003)
Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age
I'm looking forward to Jongdo's lecture next week; hopefully he will explain this subject a little more clearly. Mr. Lynch tells us that "The development of institutional repositories emerged as a new strategy that allows universities to apply serious, systematic leverage to accelerate changes taking place in scholarship and scholarly communication, both moving beyond their historic relatively passive role of supporting established publishers in modernizing scholarly publishing through the licensing of digital content, and also scaling up beyond ad-hoc alliances, partnerships, and support arrangements with a few select faculty pioneers exploring more transformative new uses of the digital medium."
How many repositories are there? I'm not really clear whether these are stated qualifications for repositories or just suggestions. It sounds as if only higher education institutions, or universities, can have a repository.
A repository may be used to preserve information, manage the 'rights for digital materials', and "facilitate access, reuse, and stewardship of content."
Muddiest Point: Dublin Core has again been mentioned. I thought that it was "a nice idea, or theory," but that it didn't exist yet. Now I'm confused again.
(I know, it doesn't take much)
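From what I can gather, Dublin Core really does exist as a published, fifteen-element metadata set (title, creator, subject, description, date, and so on), and it is the format repositories hand out when their records are harvested. Just to convince myself, here is a minimal sketch in Python of reading one record; the record text itself is something I made up for illustration, using the Lynch article as the example item.

import xml.etree.ElementTree as ET

# A made-up Dublin Core record (the element set is real; this particular record is invented).
record = """
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age</dc:title>
  <dc:creator>Lynch, Clifford A.</dc:creator>
  <dc:date>2003-02</dc:date>
  <dc:type>Text</dc:type>
</oai_dc:dc>
"""

# Read a few of the elements back out.
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(record)
for element in ("title", "creator", "date", "type"):
    for node in root.findall("dc:" + element, ns):
        print(element + ":", node.text)

So it is not just a theory; it is a small, fixed vocabulary that different systems agree on so their records can be understood somewhere else.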
I commented on Evelyn's blog: http://emc2-technologychat.blogspot.com/
Anthony's blog: http://arklibraryscientist.blogspot.com/ and
Adrien's blog: http://www.azucchino.blogspot.com/
Thursday, November 6, 2008
Week 10 Readings
This week's readings were much easier to assimilate than the ones on XML and DTDs and all that.
Web Search Engines --- Part 1 and Part 2
The major search engines were mentioned: Google, Yahoo, and Microsoft.
Search engines cannot and should not index every page on the web. One thing that was interesting was "search engines must reject as much low-value automated content as possible."
Who decides what is low-value or not?
I'm guessing that the web crawler machines decide based on how many visitors a website gets.
There are hundreds of distributed web crawler machines going about their business daily, hourly, minute by minute. They communicate with other machines and with millions of different web servers constantly.
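I have no idea exactly what software those machines run, but here is a toy, single-machine sketch in Python of the fetch-parse-enqueue loop I imagine them doing; the seed URL is just a placeholder, and a real crawler would also have to honor robots.txt, politeness delays, duplicate detection across machines, and so on.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    # Collects the href of every <a> tag on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=10):
    frontier = deque([seed])   # URLs waiting to be fetched
    seen = {seed}              # URLs already discovered, to avoid refetching
    while frontier and len(seen) <= limit:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except OSError:
            continue           # skip pages that fail to load
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
        print("fetched", url, "- found", len(parser.links), "links")

crawl("http://example.com/")   # placeholder seed URL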
There are two phases to the indexing algorithm. The first phase is scanning: the indexer looks at the text of each input document and writes numbered (term, document) entries to a temporary file.
The second phase is inversion: the indexer sorts the temporary file into term order, so all the entries for a given term end up together. "A temporary file might contain 10 trillion entries."
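To make the two phases concrete for myself, here is a tiny sketch in Python using three made-up "documents": the scan emits a (term, document number) pair for every word, and the inversion sorts those pairs so each term ends up next to the list of documents it appears in.

# Three made-up documents, numbered 0, 1, 2.
documents = [
    "deep web hidden value",
    "web search engines index the web",
    "search the deep web",
]

# Phase 1, scanning: emit a (term, document number) pair for every word.
postings = []
for doc_id, text in enumerate(documents):
    for term in text.split():
        postings.append((term, doc_id))

# Phase 2, inversion: sort by term so each term's documents sit together.
postings.sort()
index = {}
for term, doc_id in postings:
    index.setdefault(term, [])
    if doc_id not in index[term]:
        index[term].append(doc_id)

print(index["web"])    # [0, 1, 2]
print(index["deep"])   # [0, 2]

Obviously the real thing works on billions of documents and spills the pairs to disk instead of keeping a little list in memory, which is why the article's temporary file can hold trillions of entries.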
There is some caching of information done as well.
"Current Developments and future trends for the OAI Protocol for Metadata Harvesting"
Well, this was interesting, if you know what they are talking about. OAI stands for Open Archives Initiative. This basically began in 2001 with a grant from the Mellon Foundation, and there are several companies and universities that are excited about this topic. Some are building a "virtual collection" of sheet music that can be looked at, copied, and annotated in its digitized form.
Some shortcomings with OAI are that "there is no search mechanism and fairly limited browsing capabilities" and that "few of the registries approach a complete list of all available repositories."
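Out of curiosity, here is a rough sketch in Python of what a single OAI-PMH harvesting request might look like. The repository base URL is a made-up placeholder; the verb=ListRecords and metadataPrefix=oai_dc parameters, though, are part of the protocol itself.

from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical repository base URL; a real harvester would get these from a registry.
BASE_URL = "http://repository.example.edu/oai"

# oai_dc (simple Dublin Core) is the one format every OAI-PMH repository must support.
params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
response = urlopen(BASE_URL + "?" + params, timeout=10).read()

# Pull the title out of each harvested record.
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(response)
for record in root.findall(".//oai:record", ns):
    for title in record.findall(".//dc:title", ns):
        print(title.text)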
While reading articles such as this one, I think, "Why in the world would I ever have to know this information?" Then, usually, Dr. He will give us an assignment that invariably makes us use some of the information we've recently read about. I'm really hoping that this is just so we know what's out there and we won't have to actually use it at this point.
The Deep Web: Surfacing Hidden Value - Michael K. Bergman
"Searching on the Internet today can be compared to dragging a net across the surface of the ocean. While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple: Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it."
Apparently the Deep Web is huge and "is the largest growing category of new information on the Internet." I have no idea if anything I have done on the internet is stored in the Deep Web or not. If so, it is purely unintentional.
Search engines such as Excite, Yahoo, and others are only catching the surface web, which is only a tiny portion of the information available. Thus, "According to a recent survey of search-engine satisfaction by market-researcher NPD, search failure rates have increased steadily since 1997."
Figure 6 in the article, "Distribution of Deep Web Sites by Content Type," breaks this down:
"More than half of all deep Web sites feature topical databases. Topical databases plus large internal site documents and archived publications make up nearly 80% of all deep Web sites. Purchase-transaction sites — including true shopping sites with auctions and classifieds — account for another 10% or so of sites. The other eight categories collectively account for the remaining 10% or so of sites."
Hopefully there will be search engines with the capability to retrieve information from the Deep Web, so there will be more information available for the student sitting at their computer, the mom helping her child with homework, or the librarian trying to help a patron with a question.
Muddiest Point: How do you get information stored in the "Deep Web" and how do you get it out again?
Comments: I responded to Rebekah's question on the disc. board - https://courseweb.pitt.edu/webapps/portal/frameset.jsp?tab_id=_2_1&url=%2Fwebapps%2Fblackboard%2Fexecute%2Flauncher%3Ftype%3DCourse%26id%3D_9047_1%26url%3D
I commented on Lori's blog: https://www.blogger.com/comment.g?blogID=6958200230416907745&postID=404487879365965148
Also commented on Allison's blog: http://ab2600.blogspot.com/feeds/posts/default