
IELTS Reading Material: Eye robot

雕龙文库

[Introduction] Thanks to the user "雕龙文库" for contributing this piece. It is shared here as study material for everyone.

  Poor eyesight remains one of the main obstacles to letting robots loose among humans. But it is improving, in part by aping natural vision

  ROBOTS are getting smarter and more agile all the time. They disarm bombs, fly combat missions, put together complicated machines, even play football. Why, then, one might ask, are they nowhere to be seen, beyond war zones, factories and technology fairs? One reason is that they themselves cannot see very well. And people are understandably wary of purblind contraptions bumping into them willy-nilly in the street or at home.

  All that a camera-equipped computer sees is lots of picture elements, or pixels. A pixel is merely a number reflecting how much light has hit a particular part of a sensor. The challenge has been to devise algorithms that can interpret such numbers as scenes composed of different objects in space. This comes naturally to people and, barring certain optical illusions, takes no time at all and requires precious little conscious effort. Yet emulating this feat in computers has proved tough.
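
  To see what this means in practice, here is a minimal sketch in Python (using the NumPy library, an assumption of this note rather than anything named in the article). To a person the little grid below plainly contains a bright region with a sharp vertical edge; to the machine it is nothing but sixteen numbers.

```python
# A minimal sketch, assuming NumPy, of the raw material a vision
# algorithm starts from: an image is just an array of numbers.
import numpy as np

# A tiny 4x4 grey-scale "image". Each entry records how much light hit
# that part of the sensor (0 = black, 255 = white). The bright right
# half forms a vertical edge that humans spot instantly.
image = np.array([
    [ 10,  12, 250, 251],
    [ 11,  13, 248, 252],
    [  9,  14, 249, 250],
    [ 12,  11, 251, 249],
], dtype=np.uint8)

print(image.shape)  # (4, 4): to the computer, just 16 brightness values
```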

  In natural vision, after an image is formed in the retina it is sent to an area at the back of the brain, called the visual cortex, for processing. The first nerve cells it passes through react only to simple stimuli, such as edges slanting at particular angles. They fire up other cells, further into the visual cortex, which react to simple combinations of edges, such as corners. Cells in each subsequent area discern ever more complex features, with those at the top of the hierarchy responding to general categories like animals and faces, and to entire scenes comprising assorted objects. All this takes less than a tenth of a second.

  The outline of this process has been known for years and in the late 1980s Yann LeCun, now at New York University, pioneered an approach to computer vision that tries to mimic the hierarchical way the visual cortex is wired. He has been tweaking his convolutional neural networks ever since.

  Seeing is believing

  A ConvNet begins by swiping a number of software filters, each several pixels across, over the image, pixel by pixel. Like the brain's primary visual cortex, these filters look for simple features such as edges. The upshot is a set of feature maps, one for each filter, showing which patches of the original image contain the sought-after element. A series of transformations is then performed on each map in order to enhance it and improve the contrast. Next, the maps are swiped again, but this time, rather than stopping at each pixel, the filter takes a snapshot every few pixels. That produces a new set of maps of lower resolution. These highlight the salient features while reining in the computing power required. The whole process is then repeated, with several hundred filters probing for more elaborate shapes rather than just a few scouring for simple ones. The resulting array of feature maps is run through one final set of filters. These classify objects into general categories, such as pedestrians or cars.
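
  For readers who want to see the shape of that pipeline, below is a minimal sketch in Python using the PyTorch library. PyTorch, the layer sizes, the two-stage depth and the ten output categories are all illustrative assumptions of this sketch, not details of Dr LeCun's actual network; a simple nonlinearity and max-pooling stand in for the enhancement and snapshot steps.

```python
# A hedged sketch of the ConvNet pipeline described above (PyTorch assumed).
import torch
import torch.nn as nn

convnet = nn.Sequential(
    # Stage 1: small filters swiped over the image produce one feature
    # map per filter, each highlighting a simple feature such as edges.
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, padding=2),
    nn.ReLU(),                    # stand-in for the contrast-enhancing transformations
    nn.MaxPool2d(kernel_size=2),  # "snapshot every few pixels": lower-resolution maps
    # Stage 2: many more filters probe for more elaborate shapes.
    nn.Conv2d(16, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    # Final filters classify the feature maps into general categories.
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),    # e.g. 10 categories such as "pedestrian" or "car"
)

x = torch.randn(1, 1, 32, 32)     # one 32x32 grey-scale image
print(convnet(x).shape)           # torch.Size([1, 10]): one score per category
```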

  Many state-of-the-art computer-vision systems work along similar lines. The uniqueness of ConvNets lies in where they get their filters. Traditionally, these were simply plugged in one by one, in a laborious manual process that required an expert human eye to tell the machine what features to look for at each level. That made systems which relied on them good at spotting narrow classes of objects but inept at discerning anything else.

  Dr LeCun's artificial visual cortex, by contrast, lights on the appropriate filters automatically as it is taught to distinguish the different types of object. When an image is fed into the unprimed system and processed, the chances are it will not, at first, be assigned to the right category. But, shown the correct answer, the system can work its way back, modifying its own parameters so that the next time it sees a similar image it will respond appropriately. After enough trial runs, typically 10,000 or more, it makes a decent fist of recognising that class of objects in unlabelled images.
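
  In modern terms, this show-the-answer-and-work-backwards routine is supervised training by backpropagation. A minimal sketch, again assuming PyTorch, with a deliberately tiny network and random stand-in data in place of real labelled photographs:

```python
# A hedged sketch of the trial-and-error training described above:
# guess, compare with the correct answer, and let the error work its
# way back through the filters (backpropagation). PyTorch assumed.
import torch
import torch.nn as nn

# A deliberately tiny stand-in classifier; any network ending in 10
# category scores would do.
convnet = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 16 * 16, 10))

optimiser = torch.optim.SGD(convnet.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10_000):              # "typically 10,000 or more" trial runs
    image = torch.randn(1, 1, 32, 32)   # stand-in for a labelled photograph
    label = torch.randint(0, 10, (1,))  # the correct category, shown afterwards
    guess = convnet(image)              # likely wrong at first
    loss = loss_fn(guess, label)        # how wrong was it?
    optimiser.zero_grad()
    loss.backward()                     # the error works its way back
    optimiser.step()                    # filters are modified slightly
```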

  This still requires human input, though. The next stage is unsupervised learning, in which instruction is entirely absent. Instead, the system is shown lots of pictures without being told what they depict. It knows it is on to a promising filter when the output image resembles the input. In a computing sense, resemblance is gauged by the extent to which the input image can be recreated from the lower-resolution output. When it can, the filters the system had used to get there are retained.
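
  This resemblance test is, in today's vocabulary, an autoencoder-style criterion: squeeze the image through lower-resolution feature maps, try to recreate the input from them, and retain the filters when the recreation succeeds. The sketch below assumes PyTorch and a one-stage architecture chosen purely for illustration; it is not the exact procedure from the article.

```python
# A hedged sketch of unsupervised pre-training by reconstruction
# (PyTorch assumed; the architecture is illustrative).
import torch
import torch.nn as nn

encoder = nn.Sequential(                # image -> lower-resolution feature maps
    nn.Conv2d(1, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
decoder = nn.Sequential(                # attempt to recreate the input image
    nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),
)

optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

image = torch.randn(1, 1, 32, 32)       # an unlabelled picture
reconstruction = decoder(encoder(image))
loss = nn.functional.mse_loss(reconstruction, image)  # resemblance to the input
optimiser.zero_grad()
loss.backward()
optimiser.step()
# When reconstruction error is low, the encoder's filters are the ones kept.
```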

  In a tribute to nature's nous, the lowest-level filters arrived at in this unaided process are edge-seeking ones, just as in the brain. The top-level filters are sensitive to all manner of complex shapes. Caltech-101, a database routinely used for vision research, consists of some 10,000 standardised images of 101 types of just such complex shapes, including faces, cars and watches. When a ConvNet with unsupervised pre-training is shown the images from this database it can learn to recognise the categories more than 70% of the time. This is just below what top-scoring hand-engineered systems are capable of, and those tend to be much slower.

  This approach need not be confined to computer vision. In theory, it ought to work for any hierarchical system: language processing, for example. In that case individual sounds would be low-level features akin to edges, whereas the meanings of conversations would correspond to elaborate scenes.

  For now, though, ConvNet has proved its mettle in the visual domain. Google has been using it to blot out faces and licence plates in its Streetview application. It has also come to the attention of DARPA, the research arm of America's Defence Department. This agency provided Dr LeCun and his team with a small roving robot which, equipped with their system, learned to detect large obstacles from afar and correct its path accordingly, a problem that lesser machines often, as it were, trip over. The scooter-sized robot was also rather good at not running into the researchers. In a selfless act of scientific bravery, they strode confidently in front of it as it rode towards them at a brisk walking pace, only to see it stop in its tracks and reverse. Such machines may not quite yet be ready to walk the streets alongside people, but the day they can is surely not far off.
