
Deep learning

Yann LeCun 1,2, Yoshua Bengio 3 & Geoffrey Hinton 4,5

NATURE | VOL 521 | 28 MAY 2015 | REVIEW | doi:10.1038/nature14539

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state of the art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

1 Facebook AI Research, 770 Broadway, New York, New York 10003, USA. 2 New York University, 715 Broadway, New York, New York 10003, USA. 3 Department of Computer Science and Operations Research, Université de Montréal, Pavillon André-Aisenstadt, PO Box 6128 Centre-Ville STN, Montréal, Quebec H3C 3J7, Canada. 4 Google, 1600 Amphitheatre Parkway, Mountain View, California 94043, USA. 5 Department of Computer Science, University of Toronto, 6 King's College Road, Toronto, Ontario M5S 3G4, Canada.

Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users' interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning.

Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input.

Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure.

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition [1–4] and speech recognition [5–7], it has beaten other machine-learning techniques at predicting the activity of potential drug molecules [8], analysing particle accelerator data [9, 10], reconstructing brain circuits [11], and predicting the effects of mutations in non-coding DNA on gene expression and disease [12, 13]. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding [14], particularly topic classification, sentiment analysis, question answering [15] and language translation [16, 17].

We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress.
Supervised learning

The most common form of machine learning, deep or not, is supervised learning. Imagine that we want to build a system that can classify images as containing, say, a house, a car, a person or a pet. We first collect a large data set of images of houses, cars, people and pets, each labelled with its category. During training, the machine is shown an image and produces an output in the form of a vector of scores, one for each category. We want the desired category to have the highest score of all categories, but this is unlikely to happen before training. We compute an objective function that measures the error (or distance) between the output scores and the desired pattern of scores. The machine then modifies its internal adjustable parameters to reduce this error. These adjustable parameters, often called weights, are real numbers that can be seen as 'knobs' that define the input–output function of the machine. In a typical deep-learning system, there may be hundreds of millions of these adjustable weights, and hundreds of millions of labelled examples with which to train the machine.

To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector.

The objective function, averaged over all the training examples, can be seen as a kind of hilly landscape in the high-dimensional space of weight values. The negative gradient vector indicates the direction of steepest descent in this landscape, taking it closer to a minimum, where the output error is low on average.

In practice, most practitioners use a procedure called stochastic gradient descent (SGD). This consists of showing the input vector for a few examples, computing the outputs and the errors, computing the average gradient for those examples, and adjusting the weights accordingly. The process is repeated for many small sets of examples from the training set until the average of the objective function stops decreasing. It is called stochastic because each small set of examples gives a noisy estimate of the average gradient over all examples. This simple procedure usually finds a good set of weights surprisingly quickly when compared with far more elaborate optimization techniques [18]. After training, the performance of the system is measured on a different set of examples called a test set. This serves to test the generalization ability of the machine — its ability to produce sensible answers on new inputs that it has never seen during training.
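As a concrete illustration, the SGD loop described above can be sketched in a few lines of Python/NumPy. This is not code from the paper: the linear scoring model, the squared-error objective, the learning rate and the mini-batch size are all illustrative assumptions.

```python
import numpy as np

# Minimal stochastic gradient descent sketch (illustrative assumptions throughout).
# Model: linear scores y = W x; objective: 0.5 * ||y - t||^2, averaged per mini-batch.
rng = np.random.default_rng(0)
n_in, n_out, n_examples = 20, 4, 1000
X = rng.normal(size=(n_examples, n_in))      # input vectors
T = rng.normal(size=(n_examples, n_out))     # desired patterns of scores (dummy targets)

W = 0.01 * rng.normal(size=(n_out, n_in))    # the adjustable 'knobs' (weights)
lr, batch = 0.05, 32                         # assumed learning rate and mini-batch size

for step in range(500):
    idx = rng.integers(0, n_examples, size=batch)  # a small set of examples:
    x, t = X[idx], T[idx]                          # gives a noisy gradient estimate
    y = x @ W.T                                    # compute the outputs (scores)
    err = y - t                                    # derivative of 0.5*||y - t||^2 w.r.t. y
    grad_W = err.T @ x / batch                     # average gradient over the mini-batch
    W -= lr * grad_W                               # adjust opposite to the gradient
```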
Many of the current practical applications of machine learning use linear classifiers on top of hand-engineered features. A two-class linear classifier computes a weighted sum of the feature vector components. If the weighted sum is above a threshold, the input is classified as belonging to a particular category. Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane [19]. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other 'shallow' classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category.

Figure 1 | Multilayer neural networks and backpropagation. a, A multilayer neural network (shown by the connected dots) can distort the input space to make the classes of data (examples of which are on the red and blue lines) linearly separable. Note how a regular grid (shown on the left) in input space is also transformed (shown in the middle panel) by hidden units. This is an illustrative example with only two input units, two hidden units and one output unit, but the networks used for object recognition or natural language processing contain tens or hundreds of thousands of units. Reproduced with permission from C. Olah (http://colah.github.io/). b, The chain rule of derivatives tells us how two small effects (that of a small change of x on y, and that of y on z) are composed. A small change Δx in x gets transformed first into a small change Δy in y by getting multiplied by ∂y/∂x (that is, the definition of partial derivative). Similarly, the change Δy creates a change Δz in z. Substituting one equation into the other gives the chain rule of derivatives — how Δx gets turned into Δz through multiplication by the product of ∂y/∂x and ∂z/∂y. It also works when x, y and z are vectors (and the derivatives are Jacobian matrices). c, The equations used for computing the forward pass in a neural net with two hidden layers and one output layer, each constituting a module through which one can backpropagate gradients. At each layer, we first compute the total input z to each unit, which is a weighted sum of the outputs of the units in the layer below. Then a non-linear function f(.) is applied to z to get the output of the unit. For simplicity, we have omitted bias terms. The non-linear functions used in neural networks include the rectified linear unit (ReLU) f(z) = max(0, z), commonly used in recent years, as well as the more conventional sigmoids, such as the hyperbolic tangent, f(z) = (exp(z) − exp(−z))/(exp(z) + exp(−z)), and the logistic function, f(z) = 1/(1 + exp(−z)). d, The equations used for computing the backward pass. At each hidden layer we compute the error derivative with respect to the output of each unit, which is a weighted sum of the error derivatives with respect to the total inputs to the units in the layer above. We then convert the error derivative with respect to the output into the error derivative with respect to the input by multiplying it by the gradient of f(z). At the output layer, the error derivative with respect to the output of a unit is computed by differentiating the cost function. This gives yl − tl if the cost function for unit l is 0.5(yl − tl)², where tl is the target value. Once the ∂E/∂zk is known, the error derivative for the weight wjk on the connection from unit j in the layer below is just yj ∂E/∂zk.

[Panel equations, reconstructed from the figure residue. Layers: input units i, hidden units j in H1, hidden units k in H2, output units l, connected by weights wij, wjk and wkl. Forward pass (c): zj = Σi wij xi, yj = f(zj); zk = Σj wjk yj, yk = f(zk); zl = Σk wkl yk, yl = f(zl). Backward pass (d), comparing outputs with the correct answer to get error derivatives: ∂E/∂yl = yl − tl; ∂E/∂zl = (∂E/∂yl)(∂yl/∂zl); ∂E/∂yk = Σl wkl ∂E/∂zl; ∂E/∂zk = (∂E/∂yk)(∂yk/∂zk); ∂E/∂yj = Σk wjk ∂E/∂zk; ∂E/∂zj = (∂E/∂yj)(∂yj/∂zj).]
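A direct NumPy transcription of the forward and backward passes of Fig. 1c, d may make the equations easier to follow. The layer sizes, the use of the logistic non-linearity at every layer and the squared-error cost are assumptions made for illustration; biases are omitted, as in the figure.

```python
import numpy as np

def f(z):            # logistic non-linearity, f(z) = 1/(1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def f_prime(y):      # dy/dz for the logistic, written in terms of the output y
    return y * (1.0 - y)

rng = np.random.default_rng(0)
x = rng.normal(size=5)                   # input units
t = rng.normal(size=2)                   # target values t_l
W_ij = 0.5 * rng.normal(size=(4, 5))     # input -> hidden H1
W_jk = 0.5 * rng.normal(size=(3, 4))     # H1 -> hidden H2
W_kl = 0.5 * rng.normal(size=(2, 3))     # H2 -> output

# Forward pass (Fig. 1c): total input z = weighted sum of outputs below; y = f(z).
z_j = W_ij @ x;   y_j = f(z_j)
z_k = W_jk @ y_j; y_k = f(z_k)
z_l = W_kl @ y_k; y_l = f(z_l)

# Backward pass (Fig. 1d), for the cost E = sum_l 0.5 * (y_l - t_l)^2.
dE_dy_l = y_l - t_l                      # differentiate the cost at the output
dE_dz_l = dE_dy_l * f_prime(y_l)
dE_dy_k = W_kl.T @ dE_dz_l               # weighted sum of derivatives from above
dE_dz_k = dE_dy_k * f_prime(y_k)
dE_dy_j = W_jk.T @ dE_dz_k
dE_dz_j = dE_dy_j * f_prime(y_j)

# Weight gradients, e.g. dE/dw_jk = y_j * dE/dz_k (outer products here).
dE_dW_kl = np.outer(dE_dz_l, y_k)
dE_dW_jk = np.outer(dE_dz_k, y_j)
dE_dW_ij = np.outer(dE_dz_j, x)
```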
This is why shallow classifiers require a good feature extractor that solves the selectivity–invariance dilemma — one that produces representations that are selective to the aspects of the image that are important for discrimination, but that are invariant to irrelevant aspects such as the pose of the animal. To make classifiers more powerful, one can use generic non-linear features, as with kernel methods [20], but generic features such as those arising with the Gaussian kernel do not allow the learner to generalize well far from the training examples [21]. The conventional option is to hand-design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning.

A deep-learning architecture is a multilayer stack of simple modules, all (or most) of which are subject to learning, and many of which compute non-linear input–output mappings. Each module in the stack transforms its input to increase both the selectivity and the invariance of the representation. With multiple non-linear layers, say a depth of 5 to 20, a system can implement extremely intricate functions of its inputs that are simultaneously sensitive to minute details — distinguishing Samoyeds from white wolves — and insensitive to large irrelevant variations such as the background, pose, lighting and surrounding objects.

Backpropagation to train multilayer architectures

From the earliest days of pattern recognition [22, 23], the aim of researchers has been to replace hand-engineered features with trainable multilayer networks, but despite its simplicity, the solution was not widely understood until the mid 1980s. As it turns out, multilayer architectures can be trained by simple stochastic gradient descent. As long as the modules are relatively smooth functions of their inputs and of their internal weights, one can compute gradients using the backpropagation procedure. The idea that this could be done, and that it worked, was discovered independently by several different groups during the 1970s and 1980s [24–27].

The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives. The key insight is that the derivative (or gradient) of the objective with respect to the input of a module can be computed by working backwards from the gradient with respect to the output of that module (or the input of the subsequent module) (Fig. 1). The backpropagation equation can be applied repeatedly to propagate gradients through all modules, starting from the output at the top (where the network produces its prediction) all the way to the bottom (where the external input is fed). Once these gradients have been computed, it is straightforward to compute the gradients with respect to the weights of each module.

Many applications of deep learning use feedforward neural network architectures (Fig. 1), which learn to map a fixed-size input (for example, an image) to a fixed-size output (for example, a probability for each of several categories). To go from one layer to the next, a set of units compute a weighted sum of their inputs from the previous layer and pass the result through a non-linear function. At present, the most popular non-linear function is the rectified linear unit (ReLU), which is simply the half-wave rectifier f(z) = max(z, 0). In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1 + exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training [28]. Units that are not in the input or output layer are conventionally called hidden units. The hidden layers can be seen as distorting the input in a non-linear way so that categories become linearly separable by the last layer (Fig. 1).
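For reference, the three non-linearities named above are easy to evaluate side by side. The short sketch below is ours, not the paper's; it simply makes concrete the point about gradients: the ReLU has slope 1 wherever it is active, whereas the saturating sigmoids have near-zero slope for large |z|, which slows learning in deep stacks.

```python
import numpy as np

def relu(z):      # half-wave rectifier, f(z) = max(z, 0)
    return np.maximum(z, 0.0)

def tanh(z):      # hyperbolic tangent, (exp(z) - exp(-z)) / (exp(z) + exp(-z))
    return np.tanh(z)

def logistic(z):  # logistic sigmoid, 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)
print(relu(z))      # zero for z < 0, identity for z >= 0
print(tanh(z))      # saturates at -1 and 1; gradient vanishes for large |z|
print(logistic(z))  # saturates at 0 and 1; gradient vanishes for large |z|
```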
In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible. In particular, it was commonly thought that simple gradient descent would get trapped in poor local minima — weight configurations for which no small change would reduce the average error.

In practice, poor local minima are rarely a problem with large networks. Regardless of the initial conditions, the system nearly always reaches solutions of very similar quality. Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder [29, 30]. The analysis seems to show that saddle points with only a few downward curving directions are present in very large numbers, but almost all of them have very similar values of the objective function. Hence, it does not much matter which of these saddle points the algorithm gets stuck at.
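The saddle-point picture is easy to see in a toy two-weight landscape. The function below is an illustrative example of our choosing, not one analysed in refs 29, 30: its gradient vanishes at the origin, yet the curvature is positive along one weight axis and negative along the other, so the point is a saddle rather than a minimum.

```python
import numpy as np

# E(w1, w2) = w1**2 - w2**2: zero gradient at the origin, but a saddle point.
def grad(w):
    return np.array([2.0 * w[0], -2.0 * w[1]])

hessian = np.array([[2.0, 0.0],       # curves up along w1
                    [0.0, -2.0]])     # curves down along w2
print(grad(np.zeros(2)))              # [0. 0.]  -> the gradient vanishes
print(np.linalg.eigvalsh(hessian))    # [-2.  2.] -> mixed curvature: a saddle
```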
Interest in deep feedforward networks was revived around 2006 (refs 31–34) by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR). The researchers introduced unsupervised learning procedures that could create layers of feature detectors without requiring labelled data. The objective in learning each layer of feature detectors was to be able to reconstruct or model the activities of feature detectors (or raw inputs) in the layer below. By 'pre-training' several layers of progressively more complex feature detectors using this reconstruction objective, the weights of a deep network could be initialized to sensible values. A final layer of output units could then be added to the top of the network and the whole deep system could be fine-tuned using standard backpropagation [33–35]. This worked remarkably well for recognizing handwritten digits or for detecting pedestrians, especially when the amount of labelled data was very limited [36].

The first major application of this pre-training approach was in speech recognition, and it was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program [37] and allowed researchers to train networks 10 or 20 times faster. In 2009, the approach was used to map short temporal windows of coefficients extracted from a sound wave to a set of probabilities for the various fragments of speech that might be represented by the frame in the centre of the window. It achieved record-breaking results on a standard speech recognition benchmark that used a small vocabulary [38] and was quickly developed to give record-breaking results on a large vocabulary task [39]. By 2012, versions of the deep net from 2009 were being developed by many of the major speech groups [6] and were already being deployed in Android phones. For smaller data sets, unsupervised pre-training helps to prevent overfitting [40], leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where we have lots of examples for some 'source' tasks but very few for some 'target' tasks. Once deep learning had been rehabilitated, it turned out that the pre-training stage was only needed for small data sets.

There was, however, one particular type of deep, feedforward network that was much easier to train and generalized much better than networks with full connectivity between adjacent layers. This was the convolutional neural network (ConvNet) [41, 42]. It achieved many practical successes during the period when neural networks were out of favour and it has recently been widely adopted by the computer-vision community.

Convolutional neural networks

ConvNets are designed to process data that come in the form of multiple arrays, for example a colour image composed of three 2D arrays containing pixel intensities in the three colour channels. Many data modalities are in the form of multiple arrays: 1D for signals and sequences, including language; 2D for images or audio spectrograms; and 3D for video or volumetric images. There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers.

The architecture of a typical ConvNet (Fig. 2) is structured as a series of stages. The first few stages are composed of two types of layers: convolutional layers and pooling layers. Units in a convolutional layer are organized in feature maps, within which each unit is connected to local patches in the feature maps of the previous layer through a set of weights called a filter bank. The result of this local weighted sum is then passed through a non-linearity such as a ReLU. All units in a feature map share the same filter bank. Different feature maps in a layer use different filter banks. The reason for this architecture is twofold. First, in array data such as images, local groups of values are often highly correlated, forming distinctive local motifs that are easily detected. Second, the local statistics of images and other signals are invariant to location. In other words, if a motif can appear in one part of the image, it could appear anywhere, hence the idea of units at different locations sharing the same weights and detecting the same pattern in different parts of the array.

Figure 2 | Inside a convolutional network. The outputs (not the filters) of each layer (horizontally) of a typical convolutional network architecture applied to the image of a Samoyed dog (bottom left; and RGB (red, green, blue) inputs, bottom right). Each rectangular image is a feature map corresponding to the output for one of the learned features, detected at each of the image positions. Information flows bottom up, with lower-level features acting as oriented edge detectors, and a score is computed for each image class in output. ReLU, rectified linear unit. [Stages shown: convolutions and ReLU; max pooling; convolutions and ReLU; max pooling; convolutions and ReLU. Output scores: Samoyed (16); Papillon (5.7); Pomeranian (2.7); Arctic fox (1.0); Eskimo dog (0.6); white wolf (0.4); Siberian husky (0.4).]
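The convolutional layer just described (local patches, a shared filter bank, then a ReLU) can be sketched in a few lines of NumPy. This is a minimal sketch under assumed conditions: a single input channel, stride 1, 'valid' borders and a 5×5 filter size, with none of the optimizations a real implementation would use.

```python
import numpy as np

def conv_layer(image, filter_bank):
    """One convolutional layer: each filter is slid over the image (weights
    shared across positions), a local weighted sum is computed at every
    location, and the result is passed through a ReLU non-linearity."""
    n_filters, fh, fw = filter_bank.shape
    H, W = image.shape
    out = np.zeros((n_filters, H - fh + 1, W - fw + 1))
    for m in range(n_filters):                      # one feature map per filter
        for i in range(H - fh + 1):
            for j in range(W - fw + 1):
                patch = image[i:i + fh, j:j + fw]   # local patch of the layer below
                out[m, i, j] = np.sum(patch * filter_bank[m])
    return np.maximum(out, 0.0)                     # ReLU

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28))        # a single-channel input array
filters = rng.normal(size=(8, 5, 5))     # a filter bank of eight 5x5 filters
feature_maps = conv_layer(image, filters)
print(feature_maps.shape)                # (8, 24, 24): eight feature maps
```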