Enabling AI for Everyone
Formal Metadata
Title: Enabling AI for Everyone
Number of Parts: 90
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/47678 (DOI)
Transcript: English (auto-generated)
00:00
Hello everyone, good afternoon. My name is Jing Yang, a product manager from Huawei. I'm in charge of HiAI mobile computing platform planning and AI ecosystem building. Today my topic is Enabling AI for Everyone. I will focus on the HiAI Foundation
00:21
platform. During my presentation, first I will introduce the current mobile AI trends and challenges. Second, I will introduce Huawei's solution, the HiAI mobile computing platform, and then I will give some examples to show the values that HiAI brings to developers.
00:42
Lastly, I will introduce the technology that we used. Okay, let's begin. Everyone knows that breakthroughs in computing performance, algorithm innovation, and the explosive growth of big data drive this wave of AI development. Not only is AI booming in autonomous driving
01:06
and smart cities, but AI is also going to bring huge innovation to smartphones and fundamentally change the smartphone into its next iteration, the intelligent phone.
01:20
As computing performance becomes more powerful, 80% of smartphones will have on-device AI capabilities by 2022. You will find more and more applications using deep learning and machine learning technology, like Facebook, Snapchat, and TikTok. However, the mobile phone is
01:50
a general-purpose product form in which the applications are complex and the scenes of AI use cases are uncertain. That's a challenge. Another challenge for mobile AI: you know, AI
02:07
algorithms change very fast and continuously, with ever-increasing new neural network operators, and for different application scenarios developers use diverse training frameworks.
02:27
There are TensorFlow, Caffe, Torch, PyTorch, so you had better think about safeguarding development cost, migration cost, benefit sharing, and appropriate IP for the developers.
02:49
So if you want to design and develop an open AI platform for users, you have to face these challenges. So Huawei provides a total solution, the HiAI mobile computing platform.
03:09
HiAI provides an open AI platform at three levels, cloud, device, and chip, to bring an extraordinary experience to users and developers, especially at the chip level, HiAI Foundation.
03:29
In September last year, Huawei launched its Kirin 970 mobile chipset, the industry's first to integrate a dedicated neural network processing unit, the NPU. This breaks through the
03:47
AI computing performance bottleneck on the mobile phone. So HiAI Foundation provides an AI computing library and APIs dedicated to accelerating neural network models and neural network
04:03
operators on the NPU. This is the HiAI Foundation architecture. The upper layer is the HiAI Engine, which provides some popular AI function APIs, like
04:23
speech recognition, computer vision, and natural language processing, for app developers who don't have their own AI algorithms. In the next talk, my colleague will give you a detailed introduction to the HiAI Engine.
04:42
In HiAI Foundation, we have three engines: the online inference engine, the on-device training engine, and the offline inference engine. We provide a group of AI acceleration APIs to accelerate neural network operators and neural network
05:07
models on the heterogeneous computing system. And we also provide model compilation, loading, running, and unloading APIs. These are the model-level
05:27
APIs that let developers quickly convert and deploy their AI models on the HiAI platform. And we will gradually provide OpenCL and BLAS APIs to meet the
05:47
flexible requirements of algorithm innovation. Some advanced developers would like to try new neural network models and new neural network structures.
06:02
They even use customised neural network layers. So we provide customised-layer APIs to meet such flexible requirements and offer a programmable platform to developers.
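The model-level lifecycle just described, compile, load, run, unload, can be sketched generically in Python. This is not the real HiAI Foundation API; every name here (`OfflineModel`, `compile_model`, and so on) is hypothetical, and a plain NumPy matrix multiply stands in for a real network:

```python
import io
import numpy as np

class OfflineModel:
    """Hypothetical stand-in for a compiled on-device model (not the real HiAI API)."""

    def __init__(self, blob: bytes):
        self._blob = blob      # serialized "offline model"
        self._weights = None   # populated by load()

    @staticmethod
    def compile_model(weights: np.ndarray) -> "OfflineModel":
        # "Compilation": serialize the optimized weights into an offline blob.
        buf = io.BytesIO()
        np.save(buf, weights)
        return OfflineModel(buf.getvalue())

    def load(self) -> None:
        # Deserialize the blob into memory, analogous to loading an offline model.
        self._weights = np.load(io.BytesIO(self._blob))

    def run(self, x: np.ndarray) -> np.ndarray:
        if self._weights is None:
            raise RuntimeError("model not loaded")
        return x @ self._weights  # the "inference" step

    def unload(self) -> None:
        self._weights = None

model = OfflineModel.compile_model(np.eye(3))  # identity "network" for the demo
model.load()
out = model.run(np.array([[1.0, 2.0, 3.0]]))
model.unload()
```

A real offline model would additionally carry the fused, rearranged instructions and quantized weights described later in the talk; the point here is only the compile/load/run/unload lifecycle.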
06:23
And our platform supports enough mainstream neural network operators, including convolution, deconvolution, and full connection. The number of operators is up to
06:41
90 in the current version. And the number of operators we support will keep increasing to meet the challenge that AI algorithms change very fast and continuously. And we provide plug-ins for the very popular existing
07:03
Android Studio IDE, abundant documentation, reference source code, and enough technical support engineers to greatly reduce the
07:21
migration and deployment cost and time. As I mentioned before, there is a big trend of moving cloud AI to on-device AI. HiAI Foundation can enable on-device AI and bring great value to developers. First is real time. Users pursue real-
07:49
time response; many AI use cases that enhance the user experience cannot afford latency. Second is privacy. Today, a lot of machine learning services have to send your data
08:05
off to the cloud for the actual analysis. If the AI calculation can be done locally, that means the user can get the service offline and there will be less risk to users
08:22
of data getting leaked or hacked. The third is cost. If you don't need to send the user data to a server, that saves data traffic. For developers,
08:44
if the AI calculation can be done locally, it saves paying for servers. So next, I will give some examples to show the values HiAI brings to developers.
09:03
First is Prisma. Maybe most of the people here know Prisma and have played with it. Prisma transforms your photos and videos into works of art using the styles of famous artists. Prisma actually uses deep learning to implement this. Most people have had the same experience
09:26
that you need to send your photo to a server for the transformation. It can take several seconds. It's very slow. That's a bad user experience. With HiAI enabled,
09:42
let's have a look at the video. The image transformation speed is three times faster than the iPhone X cloud version. See, that's the real time that HiAI brings to Prisma. Actually,
10:06
the target can be less than one second. Another story is TikTok. TikTok is a short video application. It has been very popular recently in China. Most young people like it
10:27
very much. Everyone can share 15-second videos publicly and everyone can comment on them. In this case, the dynamic background replacement function of TikTok can be executed on the HiAI
10:43
platform. With HiAI enabled, TikTok's segmentation algorithm gains high precision and performance. Let's look at the left video, without HiAI.
11:05
Please watch carefully the edges of the fingers, the hand, and the leg. Please watch. You will find the precision of the segmentation is not good.
11:23
Okay, with HiAI, let's have a look at the right video. You will find the precision of the segmentation is much better, even around the background.
11:43
Okay, let's look at both at the same time again. There is a big difference in the precision of the segmentation.
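Conceptually, the background replacement demo above reduces to mask-based compositing: a segmentation network labels each pixel as person or background, and the app blends accordingly. Here is a minimal NumPy sketch with a hard-coded binary mask; in the real app the mask comes from the segmentation model, and its accuracy along edges like fingers is exactly what the two videos compare:

```python
import numpy as np

def replace_background(frame, background, mask):
    """Composite: keep foreground pixels where mask == 1, else use the new background.

    frame, background: H x W x 3 uint8 images; mask: H x W array in {0, 1}.
    """
    mask3 = mask[..., None].astype(frame.dtype)          # broadcast mask over RGB
    return frame * mask3 + background * (1 - mask3)

# Toy 2x2 "images": the left column is the person, the right column is background.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)          # camera frame
background = np.zeros((2, 2, 3), dtype=np.uint8)         # replacement background
mask = np.array([[1, 0], [1, 0]])                        # segmentation output

out = replace_background(frame, background, mask)
print(out[0, 0], out[0, 1])  # foreground pixel kept, background pixel replaced
```

A production pipeline would use a soft (fractional) mask and alpha-blend at the edges, which is why segmentation precision at boundaries matters so much.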
12:12
Besides that, HiAI can enable many kinds of applications, as this table lists: applications related to short video, live streaming, social platforms, and photography.
12:27
These kinds of applications are gradually starting to use computer vision algorithms like gesture, face, and posture recognition, image recognition, and photo classification.
12:44
HiAI can make them run smoothly and efficiently on the mobile phone. For shopping scenarios like Taobao, Alibaba, and Amazon, people search for and buy goods
13:00
through image recognition, with photos and videos processed locally in combination with the cloud. AR has also become more and more popular in the recent two years. ARCore and ARKit can only recognize horizontal planes and vertical surfaces.
13:26
To achieve more realistic and cool effects, better environment understanding is required, and HiAI can make it possible.
13:47
Besides the computer vision scenarios, HiAI can also benefit natural language processing scenarios such as translation applications and input method
14:03
applications. We have successfully cooperated with Microsoft Translator to make translation work offline with real-time response, and we can make input method applications get more accurate word prediction.
14:29
So, HiAI brings such great value to many kinds of applications and scenarios. What kinds of technologies does HiAI use? Next, we are going to introduce the
14:41
technology. You know, the computing of neural networks differs from the scalar computing and logic-control general-purpose computing on a traditional CPU. It also differs from the vector computing, rendering, and image processing on a GPU.
15:09
Neural network computing involves special operator computing, including convolution, deconvolution, and full connection. In most cases it comes down to tensor
15:26
computing. So HiAI Foundation integrates the dedicated neural processing unit, the NPU, and supports a dedicated AI instruction set for neural network model operations that allows
15:47
more efficient parallel execution of more neural network operators within minimal clock cycles. HiAI can compile a variety of neural network operations into
16:06
dedicated AI instruction sequences with data and weight rearrangement for optimized performance, and the instruction and the data are combined together to generate
16:23
the offline execution model. Furthermore, during compilation, cross-layer operations can be fused together to greatly reduce DDR bandwidth and thus improve performance.
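A classic instance of such cross-layer fusion is folding batch normalization into the preceding convolution; the sketch below models a 1x1 convolution as a matrix multiply for brevity. Whether the NPU performs exactly this fold is an assumption on my part; the point is that the fused form produces identical results with one operator, so the intermediate tensor never has to travel to DDR and back:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "convolution" (1x1, i.e. a matmul) followed by batch-norm parameters.
W = rng.normal(size=(4, 3))                 # out_channels x in_channels
b = rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var, eps = rng.normal(size=4), rng.random(4) + 0.1, 1e-5

def conv_then_bn(x):
    y = x @ W.T + b                                        # conv (intermediate tensor)
    return gamma * (y - mean) / np.sqrt(var + eps) + beta  # batch norm reads it back

# Fusion: fold BN's scale and shift into the conv weights/bias ahead of time,
# so a single operator runs at inference with no intermediate tensor.
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale[:, None]
b_fused = scale * (b - mean) + beta

def fused(x):
    return x @ W_fused.T + b_fused

x = rng.normal(size=(5, 3))
assert np.allclose(conv_then_bn(x), fused(x))  # identical results, one op
```

The same algebra applies to real KxK convolutions, since batch norm acts per output channel.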
16:42
HiAI supports sparse model acceleration. The NPU can skip multiply-and-accumulate operations on zero-valued coefficients, which greatly improves computing efficiency and reduces bandwidth
17:03
while maintaining computing precision. The HiAI computing platform supports low-bit quantization, effectively reducing computing bandwidth and storage consumption and improving energy efficiency. So next, I will introduce the HiAI execution
17:28
flow. As shown in the figure, using the conversion tool, a trained neural network model is converted into an offline model that can be efficiently executed on the HiAI platform
17:43
and output as a binary file. The main purpose of compiling the standard neural network model into an offline model is to optimize the network configuration. After compilation,
18:01
an optimized offline target file is generated, which is serialized and stored on disk. As a result, when inference is performed, the optimized target file is used. It's very fast and efficient during processing. During offline model computing, the offline model
18:30
is loaded from the file and the data entered by users is copied to the NPU's memory for computing. The data only needs to be imported from DDR to the NPU's memory
18:47
once for each inference. The challenges of mobile AI are that application scenes are uncertain and that neural networks and operators are continuously innovated.
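The sparsity-skipping and low-bit quantization described above can be illustrated in plain Python. This only shows the arithmetic, not how the NPU actually implements it: multiplications by zero coefficients can be skipped without changing the result, and an 8-bit weight encoding costs a bounded rounding error while using a quarter of the float32 storage:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1000)
w[rng.random(1000) < 0.7] = 0.0          # a roughly 70%-sparse weight vector
x = rng.normal(size=1000)

# Sparsity: multiplications by zero contribute nothing, so they can be skipped.
dense_macs = w.size
sparse_macs = int(np.count_nonzero(w))   # multiply-accumulates actually needed
skipped_dot = sum(wi * xi for wi, xi in zip(w, x) if wi != 0.0)
assert np.isclose(np.dot(w, x), skipped_dot)  # same result, far fewer MACs

# Low-bit quantization: map float32 weights to int8 with a single scale factor.
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)  # 1 byte per weight
w_deq = w_q.astype(np.float32) * scale
max_err = np.abs(w - w_deq).max()

print(f"MACs: {sparse_macs}/{dense_macs}, int8 max error: {max_err:.4f}")
```

Real schemes are more refined (per-channel scales, zero points, quantization-aware training), but the bandwidth and energy argument is the same: fewer operations and fewer bytes moved per inference.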
19:07
To address this, the HiAI platform uses a strategy of rapid version iteration. We provide two versions annually. HiAI V100 was released last October and HiAI V150 was released this April.
19:27
Comparing the two versions, the new HiAI V150 supports more framework APIs, including Android NN and TensorFlow Lite. We support up to 90 operators and we will support
19:46
up to 156 operators in the coming new version in September. And we provide more easy-to-use tools: a graphical IDE, Android Studio plug-ins, log analysis tools, more FAQs and
20:08
new sample code, documents, and technical support engineers to greatly reduce deployment and migration cost and time for developers. So to summarize: HiAI recommends
20:26
offline-mode inference, where compilation doesn't take up runtime, making runtime more efficient. For framework support, HiAI supports Caffe and TensorFlow Lite. We will also support Caffe2 and
20:44
ONNX in September, in the new version. Right now we support 90 operators, with rapid iteration for more, and we support all platforms after the Kirin 970 mobile chipset, with fast device growth and high
21:02
platform usage. HiAI can enable not only mobile applications but also more edge intelligence terminals such as service robots, smart home, smart city, and
21:25
automotive. This April, Huawei launched the HiKey 970, which is a popular development board for edge AI development. The HiKey 970 is the third generation of the HiKey series of Linaro's 96Boards.
21:48
It is a leading AI-enabled development board with powerful computing performance and a richer hardware interface. The HiKey 970 supports the popular AI stacks, with support for
22:05
the HiAI framework, Android NN, OpenCL, and OpenGL, and it supports Linux and Android OS. It supports both CPU and GPU AI calculation as well as NPU-based neural network computing
22:23
hardware acceleration, which can greatly help on-device AI development. These are the HiKey 970's detailed specs. So if you want more information and want to
22:46
buy the HiKey 970, you can visit our website and buy it through our resellers. That's all. Thank you for your attention. Any questions?
23:08
Okay. So you spoke about an offline model. Can this offline model be improved?
23:29
So you said that the model can be taken offline to your device? Yes. Can this model learn some more in offline mode? Yeah, actually, as we just mentioned on this slide, it's under development. We will also provide
23:55
on-device training engines, but in the current version, we don't support that. In our
24:02
next version, we will support on-device training. But the training only works on the GPU. Hello, I have a little question. I'm here. Oh, here, here. Thank you.
24:27
I have a question about on-device computation, about running offline models on devices. Do you have any information, maybe any statistics, about measured performance for common machine learning architectures? For example, say we try to run YOLO v2
24:45
on a usual mobile phone and on Huawei's neural processing unit. It's interesting to me how efficiently common architectures perform on your
25:03
devices. Actually, that's a good question. I will ask my senior engineer. Hello. My name is Sean. I'm a principal product manager for our tools and IDE team. I actually have some data here, but it's for tomorrow, so I can reveal a little bit
25:23
for you: running ResNet-50 on our device, the P20 Pro here, which, by the way, has three cameras. Looks great; I can show you some pictures tomorrow. With ResNet-50, you can infer about 2,005 pictures per second. On iPhone X, you can do 889,
25:50
so more than double what iPhone X can do. But I'll speak more about it tomorrow, in case you want to come tomorrow afternoon, same time, here.
26:04
A little bit of a commercial there. There's still time for other questions, maybe. I see somebody with a... Yeah, we'll switch to the Engine part.
26:22
Okay. Sorry, guys, there's still a question. Can you answer it, maybe? Thank you. You said there were 90-plus operators supported. Is there a list in your documentation where you can see which operations are supported? Yeah. A couple more?
26:41
A couple more, okay. And another question: do you also support tools for making performance measurements, for example CPU versus GPU versus NPU, to get specific insights for optimising the architecture for the hardware? Do you have tools to measure
27:03
on the specific parts of the chipset? Well, mine works. You can have mine.
27:31
The performance tools will come later this year, but we're not planning to release
27:41
a performance tool that simultaneously tells you, okay, if you use this network, or any sort of neural network, what the performance is on CPU versus GPU versus NPU on our device. That's not in the plan. But that being said, you can do that yourself:
28:02
you just need to run the same thing on an Android device that uses the GPU, and on an Android device that just uses the CPU, versus our NPU. Yeah, my question goes in the direction of, for example, optimising so that the NPU is used
28:23
as much as possible. For example, if I trained a network, I would have tools to see how much the GPU is utilised. On inference, you would also want most inference to run on the NPU, then compare that with the GPU utilisation,
28:40
and have as little load as possible on the CPU. So I was wondering if there is access to see which parts of the model perform how well on the different units, but I guess you already answered that pretty much. Yeah, but to answer the first part of the question: the entire model runs
29:01
on the NPU; we don't offload it to the CPU to do the non-tensor calculation. As far as I remember, that's how it is. Maybe we can take that offline later on. Yeah, sure. So, Dan, I think we can pass to the second part of the workshop, right?
29:21
And the speaker right now is going to be Vincent, I hope I pronounced it correctly. Let's give him a big round of applause and have a nice second part. Okay, good afternoon, everyone. I'm very glad to have this opportunity to stand here and share some of our work.
29:44
I'm Vincent. I'm an Android developer, and I'm also now learning and a little bit familiar with machine learning and deep learning. Before I introduce this part, please allow me to ask you a few questions. How many of you are familiar with Android development?
30:04
How many of you? Okay, almost all. How many of you are familiar with machine learning or deep learning? Okay, just a few. Thank you. So, Huawei currently plays a very important role in the field of Android mobile phones.
30:26
Our devices are based on Android, totally based on Android. And especially in recent years, we have launched lots of intelligent phones. What is an intelligent phone? We have lots of artificial intelligence features on the phone,
30:42
and they really improve the user experience. And as Dr. Li Fei-Fei said, democratizing AI is inevitable, so we are also willing to share our technologies to help ease the developer's burden of optimizing large artificial intelligence
31:02
and machine learning datasets and models for mobile apps. Okay, so what are we going to talk about today? This is my topic: the HiAI Engine Open Platform. As the keynote and the previous speaker said,
31:23
we already know that the HiAI solution includes HiAI Service, HiAI Engine, and HiAI Foundation. We already know Foundation is about how to accelerate your models, to make them run really fast. Here, I want to focus on the Engine layer.
31:40
The Engine layer is for Android developers who are not very familiar with AI technologies like machine learning and deep learning, but who want to build an application with AI elements. Engine helps you in this case. It provides minimalist APIs to let you integrate multiple AI capabilities
32:06
into your apps, to give an extraordinary experience to users. This is what we want, and this is what we provide. Okay, so just take me as an example.
32:20
I'm an Android developer first. So I will share with you the story of when I was new to machine learning: how I learned machine learning and how I created my first application with AI elements. Actually, there are some prerequisites and requirements.
32:46
We need to learn what the many machine learning and deep learning algorithms are, how they work, and why they perform well or not.
33:01
This is our purpose in learning: once we get that, we can build a more powerful AI application. You can learn this from online courses, maybe Coursera, or MIT lectures, something like that. I just learned from those.
33:23
It took me approximately six months to finish all the lessons. But this also depends on some mathematics, because you need to know statistics, which includes the concepts of mean and variance. You also need to know, and this is the hardest one, I think,
33:42
linear algebra. Linear algebra includes the concepts of vectors and matrices. It's a little complicated. And, of course, you need to know calculus: differentiation, integration, and partial derivatives.
34:02
If you know partial derivatives, you can understand why the cost function, the loss function, works. So for me, it took approximately 15 months to finish these courses. And after this, I was ready to go and do some real-world AI work.
34:25
So then I started. But unfortunately, creating AI models is really boring and tiring, because 90% of my work was collecting data and data cleaning.
34:43
This is really terrible. But anyway, I did augmentations on the data and did some training, changing the learning rate and so on. I fine-tuned the model. I created my first AI model.
35:00
Hooray! It's my model. But then I met another scenario. Unfortunately, the model I previously trained does not work there. So you know now that an AI model does not work for all scenarios, just one scenario. So the workload of creating another model is double.
35:24
And as you do more and more scenarios, the workload is multiplied by N. This is a disaster. So, anyway, at last, the development phase is completed.
35:41
But how do we upgrade models and push our models to users? Because our purpose is to sell, or to share, our application. But the model is inside the application, right? And it's difficult, because some people don't upgrade your application.
36:01
It's difficult to do that part. So all of this is terrible. Let's just turn to the next page. Okay. Unfortunately, I learned machine learning too early, because at that time, we didn't have the HiAI Engine.
36:21
As we just calculated, for myself, it took about 18 to 24 months to create real models. But with the help of the HiAI Engine, we can just do it like this: in an hour or two, you can create an application with AI elements.
36:47
So this is cool. And maybe if we have time, in the last part of my presentation, I will do some coding here and show you how to integrate these APIs.
37:01
Okay? The last part. And here is what I want to talk about: why we can achieve this goal. As you know, the HiAI Engine is an Android-based service, and we provide an AAR SDK for developers.
37:27
This AAR is very lightweight. It's responsible for binding the service to the Engine layer. So you don't need to worry about the algorithms themselves, but can focus only on what you want to
37:42
do, like facial detection or image classification, things like that. And you may pay attention to the model manager here. Why do we provide this part? Because it's hard to upgrade and maintain your models. The models we provide are either preset in the system or downloaded on demand.
38:08
So the model is not in your app. This is what we do. The previous slide showed the framework structure, and here is a brief note
38:22
that shows everything you need to know. For an application developer who wants to do some AI-related work, the only thing you need to integrate is one AAR. It's called HiAI Vision.
38:41
So, one package for all: you can use this one SDK and get everything you want. It offers a minimalist API; this is why we are pretty sure you can integrate AI elements in a very short time. And better still, the entire processing is on device.
39:04
So you don't need a network. That means it's safe and can protect your privacy: your photos aren't uploaded to a server, where who knows what they might be used for. This is safe, and you can use it anytime, anywhere.
39:22
And we also deliver the models as system presets or downloaded on demand; we already mentioned this. The previous speaker talked about HiAI Foundation, which is about how to accelerate your models. If you use our API, you get both the Foundation's acceleration and the minimalist
39:48
API. So you get them all. And okay, here comes Engine version 1.0. The APIs in orange are the ones we have already published.
40:04
You can use them right now. Later I will show you the developer website, where you can run your code. And you can see we provide a barcode and QR code detector, image category labeling, scene detection,
40:21
and document detection and correction. We also provide this cool ability called the aesthetic score, which can rate your photos: it can judge the beauty of your photos. I think it's amazing. And we also provide a rich facial detection API.
40:42
And for image enhancement, we give you image super resolution, which really improves the quality of your photos. For segmentation, we provide both portrait segmentation and image semantic segmentation. And the third line, the bottom line: these APIs are not opened outside China yet.
41:05
So, I think they will meet you soon. Okay? And you may be wondering: there are so many APIs, and what can we do with so many APIs?
41:21
Do you have these questions? The same question? Okay. Please look at a photo I took. I took this photo in Beijing, China. Has any one of you ever been to China? Okay, there was one. Okay. This picture was taken in Beijing, China,
41:41
and the weather that day was awesome, so I took it. But you can see the photo is not cool enough; it can't reflect my mood. So, luckily, my Huawei phone can make the photo like this. I think this is more movie style, and it's cool enough.
42:02
But it's no secret: we just did some image enhancement on the blue sky. But first we need to know that it's a blue sky. This is what we do. You may not have forgotten that the APIs we provide include the scene detection API.
42:22
So, with the help of the scene detection API, not only the blue sky: you can always get images classified into multiple categories. And if you input the bottom image, you will get the scene of fireworks.
42:42
So, this is what I do. Let's see another one. Can anyone recognize this UI? Is it from WhatsApp? Anyone? No?
43:01
Okay. This is an app called Prisma. It can transform your photos and videos into works of art using the styles of famous artists. And in China, lots of people like using this kind of application, which can add filters and give your photos different styles.
43:23
Actually, in China, it's popular. In Europe, you don't use this kind of app? No? Oh, my God. Okay. So, the problem with that kind of application is that they provide more and more filters
43:41
to make different styles. More filters means a stronger function, but also selection phobia. This is what I want to discuss, because in this UI on an Android phone, they can only show three, at most four, in one screen.
44:03
So, if they provided thousands of options, let me give you a task: choose the one you like best from the thousands of options. Maybe see you two days later. This is a disaster.
44:21
So, I'm glad to say that this app already uses our scene detection API. It recommends to users the top two choices from the thousands of options.
44:41
Okay. Let's see how to use the scene detection API. As we are an Android-based service, we provide an init method to let you bind to the Engine service. This is for binding the service; you already know it.
45:01
And this is the first step: you can't use any other API unless you successfully do it. Then we need to prepare our input. As we do computer vision, our input is almost always a bitmap, and you can set your bitmap into the Frame class.
45:24
Later we will provide another class, Video, which will handle your video input. Then we do the AI logic. It's a scene detector: we just create a SceneDetector and then detect. That's all. This is why it is really simple.
45:43
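The three steps just listed — bind the service via init, wrap the bitmap in a Frame, then create a detector and call detect — can be sketched as follows. This is a minimal sketch, not the real SDK: the class and method names (VisionBase.init, Frame.setBitmap, SceneDetector.detect) are assumptions modelled on the talk, and the nested stubs stand in for the actual HiAI Vision library so the call sequence runs anywhere.

```java
// Minimal sketch of the HiAI Engine call sequence described above.
// The Vision* classes below are simplified STAND-INS for the real SDK
// (class and method names are assumptions), so the flow is runnable anywhere.
import java.util.concurrent.atomic.AtomicBoolean;

public class SceneDetectionSketch {
    // Stand-in for the SDK's service-binding entry point.
    static class VisionBase {
        static final AtomicBoolean bound = new AtomicBoolean(false);
        static void init(Object context) { bound.set(true); } // step 1: bind the Engine service
    }
    // Stand-in for the SDK's input wrapper around a bitmap.
    static class Frame {
        Object bitmap;
        void setBitmap(Object bmp) { this.bitmap = bmp; }     // step 2: prepare the input
    }
    // Stand-in detector: the real one classifies the image into a scene category.
    static class SceneDetector {
        String detect(Frame frame) {
            if (!VisionBase.bound.get()) throw new IllegalStateException("init() not called");
            return frame.bitmap == null ? "unknown" : "blue_sky"; // placeholder result
        }
    }

    public static String run(Object context, Object bitmap) {
        VisionBase.init(context);        // 1. bind to the Engine service (must come first)
        Frame frame = new Frame();
        frame.setBitmap(bitmap);         // 2. wrap the bitmap in a Frame
        SceneDetector detector = new SceneDetector();
        return detector.detect(frame);   // 3. run detection and read the result
    }

    public static void main(String[] args) {
        System.out.println(run(new Object(), new Object()));
    }
}
```

The key point the speaker makes is the ordering: the init/bind step must succeed before any detector is usable, which is why the stub throws if detect is called first.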
And then you get the result. That's all. Here's another common situation we often meet.
46:01
Please pay attention to the left one. It shows children; they are all in makeup, dancing on stage. And in this picture, they all look the same, at least in these photos, and the face sizes are really small. So maybe even the parents of one girl
46:25
cannot, at a glance, figure out which one is their daughter, I think. So, let's see how our ability can deal with these situations. In the left one, you may pay attention to the
46:41
boy behind, with the blue t-shirt, whose face is half obscured. Let's see how we can handle it. Okay? I'm glad to say we already have some work on that: face detection. Face detection can recognize a side face,
47:04
or this boy whose face is blurred; side faces and such, we can recognize all of these. And for the previous situation, I want to show you the UI of Huawei Gallery.
47:22
Pay attention to the thumbnail in the yellow frame, which shows the makeup girls dancing on the stage, and to this thumbnail of the boy with the blue t-shirt. And we also, okay.
47:43
We also see the orange frame. So, this is the girl, and this is the boy. You can see Huawei Gallery has already collected 482 photos of the girl and 179 photos of the boy.
48:02
So, with the help of the face detection API, our Engine's ability, Huawei Gallery found our children. Let's see the sample code for face detection. You may notice that this is really similar.
48:22
It's very similar. This is what I do; this is what I designed. So they're similar: once you master one of the APIs, you can feel free to use any other. And okay, here is the cool ability, the aesthetic score engine, which can rate a photo.
48:44
I won't explain too much about this, because later I will do some real coding here. So, just this. Okay, similar code again.
49:00
And as the design of the APIs is really simple, here I want to show you a detail of the API design. You can see that the detect method returns a JSON object, and it also offers a convertResult method to get a Java class.
49:23
This is exactly what we do: for web developers, you may feel comfortable when you see the JSON object, and for native Android developers, you may prefer the Java class. This is what we designed.
49:41
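The dual result representation described here — a raw JSON payload for web-minded developers plus a convertResult helper that yields a typed Java class — can be illustrated with a toy version. All names and the payload below are illustrative assumptions, not the real HiAI signatures.

```java
// Sketch of the dual result representation described above: the detector hands
// back a raw JSON payload, plus a convertResult helper for a typed Java object.
// Names and payload are illustrative assumptions, not the real HiAI API.
public class ResultConversionSketch {
    // Typed result, preferred by native Android developers.
    static class AestheticScore {
        final int score;
        AestheticScore(int score) { this.score = score; }
    }

    // Raw JSON string, convenient for developers with a web background.
    static String detectAsJson() {
        return "{\"score\": 78}"; // placeholder payload
    }

    // Naive converter from the JSON payload to the typed class.
    static AestheticScore convertResult(String json) {
        String digits = json.replaceAll("[^0-9]", ""); // toy parsing, enough for the sketch
        return new AestheticScore(Integer.parseInt(digits));
    }

    public static void main(String[] args) {
        String json = detectAsJson();
        AestheticScore typed = convertResult(json);
        System.out.println(json + " -> " + typed.score);
    }
}
```

The design choice being described is that one detect call serves both audiences: callers who want to forward the raw JSON and callers who want a compile-time-checked Java object.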
And okay, here comes my favorite part. Think of the photos you see in your albums. If you do not have a Huawei device, maybe you see this kind of messed-up album: we can't find last summer's Disneyland trip photos of our kids,
50:01
and we also can't find last year's birthday photos. It's really hard to find our photos. Luckily, we already provide an API for classification labeling:
50:24
image category labeling. Yes. And let's see. This is a lot of photos, and we search. What did I input? I think I input "October 26th birthday". This is October 26th, birthday.
50:41
Sorry, it's in Chinese. But then you get all the results for that birthday. This is what we do: you can quickly search your photos, because we provide an API that can label all of your photos. And you may notice that it's all on device, so you don't need to worry about privacy problems.
51:07
And here comes another sample code. As this is really simple, I want to tell you another detail of the API. Pay attention to the second parameter of the detect method:
51:24
I just input null here. You can choose to pass in a subclass of the vision callback, which makes the call asynchronous. And of course, the detect method may cost time;
51:41
even though it is really fast, AI logic really costs time. So we prefer that you run the detect method in a worker thread. If you do not have any special requirements, the synchronous mode is also enough.
52:04
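The threading advice above — detect may take time, so run it on a worker thread, or pass a callback for asynchronous mode — looks roughly like this in plain Java. The detect body is a stand-in; only the threading pattern reflects the talk.

```java
// Sketch of the threading advice above: run a (slow) detect call on a worker
// thread, with an optional callback for asynchronous mode. The detect body is a
// stand-in for the real API; the threading pattern is the point.
import java.util.concurrent.*;

public class AsyncDetectSketch {
    interface VisionCallback { void onResult(int score); }

    // Stand-in for a detect call that "costs time".
    static int detect(Object frame) {
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return 78; // placeholder score
    }

    // Synchronous mode, but moved off the UI thread onto a worker.
    static Future<Integer> detectOnWorker(ExecutorService worker, Object frame) {
        return worker.submit(() -> detect(frame));
    }

    // Asynchronous mode: pass a callback instead of blocking on the result.
    static void detectAsync(ExecutorService worker, Object frame, VisionCallback cb) {
        worker.submit(() -> cb.onResult(detect(frame)));
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        System.out.println("sync-on-worker: " + detectOnWorker(worker, new Object()).get());
        CountDownLatch done = new CountDownLatch(1);
        detectAsync(worker, new Object(), score -> { System.out.println("async: " + score); done.countDown(); });
        done.await();
        worker.shutdown();
    }
}
```

Either shape keeps the blocking work off the UI thread, which is the speaker's actual recommendation; the callback variant just spares the caller from holding a Future.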
And here, we all know these situations: people take photos of slides, and if you are not sitting right in the middle, you get this kind of skewed photo.
52:20
Actually, I should have said this at the start. To make sure you don't end up with this kind of photo, we have the solution: several APIs together can deal with this problem.
52:42
The first one is document detection and correction. Okay. So, you see the photo here, like this. We can use document detection to detect where the document is, and we can correct the angle of the document, like this. And there is another API to solve this situation. Oh, sorry,
53:03
this is the document sample. So, you may ask: this is all similar, so why do I insist on showing the similar code again and again? Does anyone have an idea why I do this?
53:20
Anyone? Yes. I guess you don't know. The reason I do this is that I want you to feel tired when you see the similar code, because from when I started speaking until now, you have seen the sample code five times,
53:41
and now maybe you're tired of it. What does that prove? It proves the API is designed really simply: you have already mastered it and don't want to see it again. This is what I intended. Okay. Another ability that helps with these slide photos is image super resolution.
54:07
We provide enhanced image clarity, and we support 1X and 3X modes. 3X applies to both height and width, so it's nine times magnification. And let's see. Okay.
54:21
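The magnification arithmetic is easy to check: a 3X mode scales height and width by 3 each, so the output holds 9 times the pixels. The nearest-neighbour upscale below is only a stand-in for the real super-resolution model, used to make the arithmetic concrete.

```java
// Illustration of the magnification arithmetic above: a 3x mode scales both
// height and width by 3, so the output has 9 times the pixels. This
// nearest-neighbour upscale is a stand-in for the real SR model.
public class SuperResolutionSketch {
    static int[][] upscale(int[][] img, int factor) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h * factor][w * factor];
        for (int y = 0; y < h * factor; y++)
            for (int x = 0; x < w * factor; x++)
                out[y][x] = img[y / factor][x / factor]; // copy the nearest source pixel
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { {1, 2}, {3, 4} };  // 2x2 input: 4 pixels
        int[][] big = upscale(img, 3);     // 6x6 output: 36 pixels
        System.out.println(big.length * big[0].length / (img.length * img[0].length)); // 9
    }
}
```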
Let's see the result: the one-click conversion to PPT. This is also the UI of the Gallery, and we did it together with the app WPS. Let's see. First, we select the slide photos, and then, oh,
54:42
then we watch. This is really quick; you already have the presentation. Let's just do it again, I didn't follow the speed. So: select, and share, and create presentation,
55:03
and you just get the result. Okay, this is what we do. And this amazing function is done by the app WPS using our
55:22
document detection and correction APIs and the super resolution API. And I'm glad to say developers from WPS also came here today, willing to share their development stories and experience with us. After their sharing, I will do the coding.
55:44
So, time for WPS. Let's welcome them. Okay? I'm Lina from Kingsoft Office in Beijing.
56:00
It's a great pleasure to meet you here. First of all, please allow me to introduce our company and our product. Kingsoft was founded in 1988 and listed in Hong Kong in 2007. It has four subsidiary companies:
56:20
Kingsoft Office, Cheetah Mobile, Kingsoft Cloud, and Seasun Games. Our offices are located in China and the United States. Our product, WPS Office, is one of the most popular office suites. With WPS Office,
56:45
you can read, create, and edit all kinds of files, including Word, Excel, PowerPoint, and PDF. You can share and access them anywhere at any time, and you can use WPS Office for free on all your devices:
57:04
your Android phone, your iPhone, your tablet, and your computer. All your documents can be saved and synchronized across all your devices with our cloud service. There are over 1.25 billion installs and more than 250 million monthly active users.
57:28
WPS dominates the China market, and we are expanding our global footprint and increasing international awareness.
57:41
What we are talking about here today is how AI is used in WPS Office and how AI is redefining mobile applications. Just like today: you come here to join the workshop, and you want to share the presentation with your team. What will you do? Now you can finish it easily with the help of HiAI and WPS Office.
58:05
HiAI and WPS Office can help you take notes, such as meeting minutes and lecture notes, without handwriting or keyboarding. What you need to do is take out your mobile, take photos,
58:21
and select the photos you want to add to your presentation. HiAI and WPS Office will recognize the main content in each of your photos, capture clean images, and transform them into a presentation document. Then you can send it to anyone you want.
58:43
Yes, AI makes it easy. As Vincent mentioned before, we used three HiAI APIs to create this feature: document detection, document correction, and image super resolution.
59:03
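The feature just described chains the three capabilities per photo, in order: document detection, then document correction, then image super resolution, before assembling the slides into a presentation. A sketch of that pipeline order, with stand-in stage functions (the real APIs are not reproduced here):

```java
// Sketch of the WPS feature's pipeline order: each selected photo goes through
// document detection, document correction, then super resolution, and the
// results are assembled into slides. The stage bodies are stand-ins; only the
// stage order comes from the talk.
import java.util.*;

public class SlidePipelineSketch {
    static String detectDocument(String photo)  { return photo + ">detected"; }
    static String correctDocument(String photo) { return photo + ">corrected"; }
    static String superResolve(String photo)    { return photo + ">enhanced"; }

    // Run every selected photo through the three stages, in order.
    static List<String> buildPresentation(List<String> photos) {
        List<String> slides = new ArrayList<>();
        for (String p : photos)
            slides.add(superResolve(correctDocument(detectDocument(p))));
        return slides;
    }

    public static void main(String[] args) {
        System.out.println(buildPresentation(Arrays.asList("photo1", "photo2")));
    }
}
```

The ordering matters: detection finds the document region, correction de-skews it, and only then is super resolution applied to the cleaned-up image.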
You couldn't imagine how fast the integration process was: it took us only half a day working with the HiAI APIs to make the main procedure work. We spent several days and took a lot of photos to test
59:21
and optimize the working flow to improve the user experience. After we created this feature, users opened more files and spent more time in WPS. We think that AI and machine learning will open up and unlock many interesting areas.
59:46
We must keep learning and get ready for the upcoming challenges. That is why we cooperate with HiAI. During our cooperation, HiAI helped us a lot with its professional AI skills.
01:00:00
It builds a favorable ecosystem for developers and provides all-round technical support. HiAI will bring the power of AI and machine learning into your application. Now, we keep working on creating more intelligent features with HiAI.
01:00:23
We hope that you will join us; let us work together to get AI to help everyone. Here is my email. For any cooperation, please feel free to contact me. Thanks for your time.
01:00:44
Okay, thanks for sharing, and I'm pretty sure Huawei and WPS will together build more fantastic features to improve the user experience. This is the
01:01:01
support info for the HiAI Engine. This is what the website looks like, and from it you can use the APIs I mentioned earlier, right now.
01:01:20
Have I finished? Okay. You can also scan this QR code or email us for support. And I sincerely want you Android developers
01:01:41
to join us, use the APIs, and together create some fantastic applications. This is what I want. Sincerely, join us. Okay. Now let's do some coding.
01:02:16
I'll launch my Android Studio. Just a second.
01:02:28
Duplicate the screen, okay. Okay.
01:02:46
Here I'll take an example. As you are all Android application developers and some of you are familiar with machine learning, I just took the demo from TensorFlow Lite, and
01:03:02
I'll show you how you can integrate the API I mentioned into the TensorFlow Lite demo. Okay, so here is
01:03:22
the demo code from TensorFlow Lite, which has the ability to quickly classify images. So it does image classification. Let's see here:
01:03:41
it has a Camera2BasicFragment, and the main logic of this demo is classifying images. So this is what it does, and I'll tell you how to use the
01:04:00
aesthetic score here. We also have an IDE plug-in for this; tomorrow at the same time our colleague will introduce how to install it and the functions of the IDE. I have already installed it. It's a plug-in for
01:04:24
Android Studio, and here it is; I just launched it. Here are all of our APIs; you can see the aesthetic score at the top, and here is what I do. You can see the code here:
01:04:45
TensorFlow classifies frames, but I don't want to classify frames; I just want to judge my photos. So let's
01:05:02
comment that out, and here I will click and drag the aesthetic score in. You'll see all the code here. This is the entire sample code you have already seen many times, and we can look at it.
01:05:27
The first part is the initialization code. Because this demo classifies frame after frame, we don't want to init it every time we meet a new frame, so I just cut it
01:05:44
from here. So we cut it, and where should I paste it, can you guess?
01:06:00
Okay, maybe onActivityCreated, the first time the activity is created: I just paste it here. And okay, this is wrong, so
01:06:21
okay, in init the first parameter is a context; let's use getContext() here. And let's look at the classify-frame part. Here we get the aesthetic score detector, and we
01:06:44
prepare our input as a Frame. As the TensorFlow Lite demo already has a bitmap, we don't need to prepare one ourselves; we just set the bitmap into our Frame, and
01:07:01
then we get the score. This is the JSON object, and this is the aesthetic score Java class, and we get the score. The last part of it is destroy. As this is a real-time demo and I will process frames again and again, there's no need to destroy it every frame.
01:07:25
Okay, so let's also cut that out and figure out where to put the destroy method. I think onDestroy is right, so let's paste it here, and
01:07:49
let's go back to the code here. Okay, here we get the score, and
01:08:01
we need the text view to show it: append "the score of your photo is",
01:08:22
then append the score. Okay, now we have the score. I just need to
01:08:42
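The edits just narrated follow a standard Android lifecycle placement: initialise the engine once when the activity is created, score each incoming frame instead of classifying it, and release the engine in onDestroy. A condensed sketch, with a stand-in engine class rather than the real HiAI aesthetic-score detector:

```java
// Condensed sketch of the lifecycle wiring from the live demo: init once in
// onActivityCreated, score per frame instead of classifying, release in
// onDestroy. The engine class is a STAND-IN, not the real HiAI detector.
public class LifecycleSketch {
    static class AestheticsEngine {
        boolean ready;
        void init(Object context) { ready = true; }     // bind once, up front
        int score(Object bitmap) {
            if (!ready) throw new IllegalStateException("init first");
            return 78;                                   // placeholder score
        }
        void destroy() { ready = false; }                // release once, at the end
    }

    final AestheticsEngine engine = new AestheticsEngine();

    void onActivityCreated(Object context) { engine.init(context); }  // lifecycle start
    String classifyFrame(Object bitmap)   { return "score: " + engine.score(bitmap); }
    void onDestroy()                      { engine.destroy(); }       // lifecycle end

    public static void main(String[] args) {
        LifecycleSketch fragment = new LifecycleSketch();
        fragment.onActivityCreated(new Object());
        System.out.println(fragment.classifyFrame(new Object()));
        fragment.onDestroy();
    }
}
```

Pulling init and destroy out of the per-frame path is the whole point of the demo's cut-and-paste: binding and releasing per frame would waste time the talk's real-time loop cannot afford.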
run it. Okay, where is my phone? Okay, let's see what will happen. Oh, you guys can't see the phone, so
01:09:08
I'll just install it, and then I'll show you how it works.
01:09:32
Okay, we connected our phone and built the application; let's see what happens.
01:09:42
Okay, there's an error. So where is it? It also needs a context, so we add the context here, and I'll try again.
01:10:03
So, let's see what happens. Is there still an error? I'm a little bit nervous now. Okay, I think it's installing the application, and
01:10:24
here it comes. Now there's no need to watch this screen; I want you to see what's going on in the application.
01:10:43
How do we access this? Oh.
01:11:06
Do any of you want to come here and try? Okay, welcome.
01:11:44
So, this is the score, and you can see it. Okay, fine. So, I'm going to show you guys.
01:12:01
Which is not... oh, this is good, I'm going to use this. Here, here, here. Come here. I've got some chocolate and candy. Okay, so let's see.
01:12:21
Maybe you can get this. Can you add light? Okay, light. Okay, this is good.
01:12:41
That is fantastic. Is this higher? 75. Maybe you can use that: 78. No, it can't get much better. Maybe you can use this light. Anyone see that?
01:13:01
Anyone see that? No, it goes down, because of the shadows here. So, this ability judges how well your photos are taken: the better the photo is taken, the higher the score will be.
01:13:20
So, it's difficult standing here. If anyone wants, maybe we can get closer, probably here. Oh no, this is a really bad photo; we only got less than 30 marks. But actually, I got, how many?
01:13:44
Thank you. And tomorrow, at the same time,
01:14:04
2 pm, our colleague will tell you about the IDE. So, you can come here.
01:14:21
Oh, okay. The question is about the aesthetic score: what does it mean? What is
01:14:41
the point? It considers things like people's poses, low light, blur, or shimmer. Ah, okay, thanks. That's clear. Can you tell us, please,
01:15:01
what the IDEA plugin is that you use? The IDEA plugin? Can I finish this first? We have the developer website. Here is the developer site,
01:15:46
and you just choose that name. What is the name? (He spells out the address.) You can download it from the website.
01:16:03
Oh, sorry. I think...
01:16:42
Okay, okay. Here, this is the QR code, and at the bottom is the website. Any questions?
01:17:01
Any other questions? We still have a little bit of time. Hello, thank you for your presentation. I just browsed the website
01:17:20
that you showed us here, and I can see that there are only samples for computer vision. Do you have any samples for speech recognition or something like that? Speech recognition? Actually, we have it,
01:17:41
but it's not open, not yet. Okay, so my question is: do you have any models for sound processing? Because I can see that there are a lot of
01:18:00
models for visual processing, for detecting scenes and things like that. Do you have any models or samples showing how to create models for speech, or generally for sound processing? Oh, actually,
01:18:21
of course. You have prepared some samples showing how to do face detection, for example; do you have any samples for processing sound?
01:18:41
For sound, yes: besides the computer vision part, for sound we have what we call ASR. The ASR part we can't yet
01:19:02
publish in Europe, because the language is different. We are working on English and so on; it will meet you soon. And if I have a Huawei
01:19:21
phone, can I run my own models to do speech recognition and use the NPU? For that, you can use the HiAI Foundation the previous speaker presented. Okay, thank you.