
FreeBSD for High Density Servers


Formal Metadata

Title
FreeBSD for High Density Servers
Title of Series
Number of Parts
41
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
To promote FreeBSD for High Performance Computing or High Density Servers, it is considered very important to share information about how to install, set up, manage, patch, and fix FreeBSD so that it works correctly on those machines. In this session, I am going to talk about how to install FreeBSD on the Micro Modular Server and how to manage and control those servers. To install FreeBSD on High Density Servers, including the NEC Micro Modular Server or HP Moonshot, you need different skills compared to installing on common PCs and rack-mount servers. This kind of server (low energy consumption, low computing power, and high space efficiency) is good for packing many edge servers or web servers into limited rack space, for example, as an alternative to blade servers or many-core servers like the Sun Fire T1000/T2000.
Transcript: English (auto-generated)
NEC micro modular server DX1000. The code name is Mercury. Well, these days, FreeBSD is widely used for a variety of purposes. For example, storage servers, edge servers, feature-rich network appliances, embedded devices, home electric appliances, game consoles, et cetera. However, for high performance computing or high density servers, people use Linux instead of FreeBSD.
I am going to introduce the DX1000 high density server. It is a good choice as a FreeBSD-based cluster system. Hi, I'm Daichi, living in Tokyo, Japan. ONGS is my first IT startup company, founded in 2002. It's a small company. And BSD Consulting is the next startup company, aiming to support FreeBSD for enterprise companies. I have been a FreeBSD committer since 2002, and I have been writing articles since 1998.
Well, this is about my activity in the FreeBSD community. Years ago, we began a study session on FreeBSD. It's called FreeBSD Benkyoukai. It means FreeBSD seminar, held roughly every month.
The next seminar is coming soon. It is delightful. I'll keep doing this activity as long as possible. First, I'm going to talk about my intention for this session. Let's look at the next page: supercomputers, TOP500 statistical data.
TOP500 is one of the statistical lists of the most powerful supercomputers. TOP500 lists have been compiled twice a year since 1993, with the help of high-performance computing experts, computational scientists, manufacturers, and internet communities. The TOP500 computers are ranked by their performance on the LINPACK benchmark. The main objective of the TOP500 list is to provide a ranked list of general purpose systems that are commonly used for high-end applications.
Nowadays, the TOP500 is the de facto standard supercomputer performance ranking list. The TOP500 project was started in 1993. Yes.
As of November 2014, TOP500 supercomputers are mostly based on x86 64-bit CPUs. In recent years, heterogeneous computing, mostly using NVIDIA graphics processing units as coprocessors, has become a popular way to reach a better performance-per-watt ratio and higher absolute performance. As you can see, Linux is the king of the TOP500 supercomputers. As of November 2014, 97% of the world's fastest computers run the Linux kernel. Those 97% include the most powerful supercomputers, including those ranked in the top 10. Well, how about the other operating systems?
There's Unix. Old soldiers never die, they just fade away. The BSD-based systems, hmm, fading completely. The one mark left is OS X. Yes, it's crazy. Well, there is more to the story.
FreeBSD: it has completely vanished from the list. I think that there are some reasons for FreeBSD's defeat in high-performance computing. Historically, FreeBSD has been running on consumer PCs or low-spec, low-price rack-mount servers for SOHO or edge server systems.
Devices for high-performance computing have been expensive, too costly for someone to buy. These days, thanks to the FreeBSD Foundation, the FreeBSD project has been working with some hardware vendors. But at every stage of this project, we have had connections with only a few hardware vendors. On the other hand, there were many Linux vendors that helped Linux run on high-performance computing systems. We have few enterprise friends. That was the biggest problem.
Establishing a healthy relationship between vendors and the FreeBSD project isn't easy work for someone to do personally. Are there any ways to improve these circumstances? I believe we have one. A key point for improving these circumstances
is information sharing. That is the reason I am here. Yeah, in recent years, I have been working with NEC as a FreeBSD developer for BSD Consulting in Tokyo, to verify that FreeBSD works well on NEC Corporation's latest rack-mount servers. The micro modular server DX1000 is NEC's latest high-density rack-mount server. NEC Corporation is a Japanese multinational provider of information technology services and products.
NEC provides information technology and network solutions to business enterprises, communications services providers, and government agencies. NEC began its computer research and development in 1954, and produced the first crossbar switching system in Japan. Back in the day, NEC built the Earth Simulator, the fastest supercomputer in the world from 2002 to 2004. NEC is one of the famous computer vendors in Japan.
The micro modular server DX1000 is one of the Express5800 series machines. The Express5800 server family is the rack-mount server line for enterprise customers. The micro modular server DX1000 is a high-end model among the Express5800 series. It is too expensive for someone to buy casually. Even if you or your project team need FreeBSD high-density servers, the DX1000 doesn't emerge as a candidate, because there is little English information about FreeBSD and the DX1000. Japan is an exception: BSD Consulting, my company, published a verification report on its website, so those who can read Japanese can try FreeBSD on the DX1000 easily.
So I hope that anyone who has a chance to use expensive hardware will test whether FreeBSD works or not, how to install and set it up, and which features work or not, and publish the results on the internet. Openly sharing information about expensive hardware is a big step for FreeBSD's success.
Now I am going to introduce the NEC micro modular server DX1000. This one. The DX1000 incorporates outstanding performance, performance per watt, flexibility, and enterprise-class reliability in an extremely dense design. After the 2011 earthquake and tsunami, performance per watt became an important criterion for server hardware in Japan. The DX1000, a 2U enclosure system with 46 Intel processor-based micro server modules, is designed for lightweight scale-out computing such as web hosting and big data analytics, as well as cloud service providers. Each high-density compute node has the latest Intel Atom series eight-core processor, four DIMM slots, and one SSD slot. The DX1000 supports operation in a 40 degree Celsius environment, which minimizes cooling cost. The shared fan and power supply design, with 80 PLUS Platinum certified power supplies, maximizes power efficiency. All modules and shared components, including fans, power supply units, chassis management modules, and switch modules, are hot-swappable and easy to replace.
The key point of using the DX1000 is to understand its structure and the relationships between modules. The DX1000 consists of five kinds of modules: one, the network switch module; two, the CMM (chassis management module); three, the server module; four, the hard disk module; and five, the fan module. In particular, it is important to understand the relationship between switch modules, CMM modules, and server modules. If you don't understand those relationships, you will fail to install FreeBSD.
You get access to the network switch modules through two serial consoles. The two serial consoles, as micro USB ports, are on the front panel. First, you should access the network switch module, configure its network structure, and grab the MAC address information of the CMM modules and server modules. The network switch module is the key point of using the DX1000. Next, you access the server modules using ipmitool. At this stage, you must set up your own DHCP server and access the CMM modules and server modules. CMM and server modules are configured by default to use DHCP to obtain their IP addresses. Next, access the server module using ipmitool.
Set up the BIOS and install FreeBSD by PXE boot. After installation, you can access the server module using SSH. Yeah, this is a photo of the DX1000. The DX1000 enclosure is 2U in size. It can include up to 46 CPU modules. Those must include two CMM modules. However, because of a limitation of the power units, to bring it all down to earth, forty CPU modules is the practical limit per enclosure.
You can include 12 hard disk modules per enclosure. There are 12 PCI Express slots per enclosure. You can use up to two network switch modules per enclosure. The physical module mounting operation is as easy as Lego blocks: easy push, easy pull. By simple calculation, for example, you can take 16 enclosures per rack, so you can have 608 server modules per rack. It means you have 4,864 cores per rack. If you run 10 bhyve guests per server module, you have 6,080 guests per rack. If you run 100 jail environments per server module, you have 60,800 jail environments per rack. You can reduce the number of physical servers and their power consumption. As NEC says, you can reduce power consumption by 70%.
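For reference, here is the arithmetic behind those figures, assuming 38 usable server modules per enclosure and 16 enclosures per rack (both numbers follow from the talk):

    16 enclosures/rack x 38 server modules = 608 server modules/rack
    608 server modules x 8 cores           = 4,864 cores/rack
    608 server modules x 10 bhyve guests   = 6,080 guests/rack
    608 server modules x 100 jails         = 60,800 jails/rack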
You have up to two network switch modules, and each has two 40-gigabit QSFP uplink ports. Uplinks also include 1000BASE-T for management, and there are 46 2.5-gigabit downlinks to the server modules. The network switch modules are the backbone of the DX1000: they manage the DX1000's internal network structure. Users can access each server module or CMM module through the network switch module. The serial console ports on the DX1000 front panel are connected to the network switch modules. You can log in to the network switch modules through the serial console.
through the server console. Yeah, CMM module. CMM module manage other modules. You must use two CMM modules for enclosure. One CMM module act as active CMM modules
and the other act as a standby module. Standby module does a few works. So it is not completely hot standby module. You need both for enclosure. You can control CMM module using IPM tools, power on, power off, soft reset
to grab wrong information, cell information or sensor's information. On the left one, it's a server module. The server module is very similar to CMM modules. A server module includes one processor into an atom processor C2750 or C2730.
Both have eight cores. It has four DDR3-1600 ECC UDIMM slots; the maximum memory size is 32 gigabytes. It has one mSATA SSD slot, with up to 120 gigabytes of storage, an IPMI 2.0 BMC port, and 2.5-gigabit links to the network switch modules. The server module isn't a high-performance device. If you need a high-performance node, the DX1000 isn't suitable. Bundling many small servers into a rack, or computation using many servers, like Hadoop, is a suitable use case for the DX1000. How about the hard disk module?
It is a 2.5-inch serial ATA hard disk. You can choose 500 gigabytes or one terabyte. If you want to use hard disk modules, you should use Intel Atom processor C2750 modules; C2730 modules can't use hard disk modules.
Fan modules: you can have up to 10 fan modules per enclosure. The biggest problem of the DX1000 is these fan modules. The fan modules are very noisy. The sound is similar to 100 Dyson cleaners running in the same room at the same time; they are noisy. Working in the same room as the DX1000 is impossible. You should run it in a data center, or at least you should run the DX1000 in another room.
The power unit is a 1600-watt, 80 PLUS Platinum certified hot-swappable power supply unit. Next, the front panel. The DX1000 has two serial consoles. You can access the network switch modules
through these ports. This is a simple diagram that describes the relationship among server modules, CMM modules, the network switch module, and a management terminal. With the default settings, the management LAN, that is, IPMI access, is only through the RJ45 interface. Access to the server modules' data LAN is only through the QSFP interface. If you want to use RJ45 as the data LAN interface too, you must change the settings using ipmitool, log in to the network switch module, and change the network structure using some commands. I describe the details in appendix A. This is a detailed specification sheet. The server module itself is a straightforward Intel Atom computer.
When you mount the DX1000 in a rack, please read the user guide carefully and take care during the operation. The DX1000 is lighter than other similar products, but it is still very heavy, so when you do the manual work, please do it carefully.
The server module is a straightforward Atom machine; FreeBSD can handle it very easily. However, installation is a little bit confusing. You must mount the DX1000 and set up your own DHCP network. The DX1000 has two network switch modules; you must set up at least the first network switch module. As the first step, you should obtain the MAC addresses of the CMM modules. You can obtain that information
from the network switch module. So log in to the network switch module through the serial console on the right side of the front panel and type some commands. The serial console's baud rate is 115200 bps. The default user ID and password are both admin, all in small letters. The left port is connected to the first network switch module; the right port is connected to the second network switch module. The USB cable is included in the DX1000 box. You can log in from a FreeBSD terminal using the cu command like this. Change the cuaU0 to match your environment. The option -s means the baud rate. The "switch>" prompt is the prompt of the network switch module.
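As a minimal sketch, the login from a FreeBSD box would look like the following; the device node cuaU0 is an assumption, so check dmesg for the name your USB serial cable actually gets:

    # connect to the network switch module's serial console at 115200 baud
    cu -l /dev/cuaU0 -s 115200
    # log in with the default credentials (admin/admin),
    # and disconnect later with the ~. escape sequence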
You can change the network structure using commands at that prompt. Next, you obtain the MAC address of the network switch module and calculate the MAC addresses of the CMM modules from it. This is a little bit confusing point. Type enable and then show system. The show system command shows information about the network switch module. In this case, 74D435E9E262 is the MAC address of the network switch module itself.
You can calculate the MAC address of a CMM module from this MAC address. The address one less than the MAC address of the network switch module belongs to another interface, and the address two less than the MAC address of the network switch module is the MAC address of a CMM module. In this case, 74D435E9E260 is the address of a CMM module. And you should obtain the IP address assigned to the MAC address of the CMM module. The CMM module's default network setting is DHCP, so the CMM module has an IP address already. If you use FreeBSD as the DHCP server, log in to the FreeBSD DHCP server and obtain the IP address with the arp command like this. In this case, the IP address of the CMM module is 192.168.1.29.
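A minimal sketch of that lookup on the FreeBSD DHCP server; the MAC address is the one calculated above, and yours will differ:

    # dump the ARP table and pick out the CMM module's entry
    arp -a | grep -i 74:d4:35:e9:e2:60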
Okay, next, you should obtain the management ports' MAC addresses of the server modules. Unfortunately, the management utility command that is included in the DX1000 utility disk is a CentOS 6 binary. You must set up CentOS 6 to obtain the information; I think that CentOS on bhyve is one of the ways. You give the CMM module's IP address to that utility, and you can obtain the MAC address list of the server modules in the enclosure. In the same way, the address one less than the MAC address of a server module's management port is the MAC address of the server module's NIC number two, and the address two less than the MAC address of the server module's management port is the MAC address of the server module's NIC number one. A very confusing point.
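To make that subtraction concrete, here is a worked example with a made-up management-port MAC address:

    management port MAC: 74:d4:35:e9:e2:72
    minus 1 = NIC 2:     74:d4:35:e9:e2:71
    minus 2 = NIC 1:     74:d4:35:e9:e2:70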
Well, so we grabbed it; we obtained the MAC address list. With the IP addresses and MAC addresses, write dhcpd.conf to assign IP addresses to the CMM modules and server modules. You need configuration for PXE boot and NFS too. The blue lines are the IP assignments for the CMM modules and server modules. The orange lines are the configuration for PXE boot and the network file system. FreeBSD's PXE boot takes three steps.
The first step is DHCP: a PXE host tries to obtain an IP address and TFTP information from the DHCP server. In the second step, the host tries to load the pxeboot kernel from a TFTP server. And in the third step, the host tries to load the installer from an NFS server. So you should set up a TFTP server and an NFS server, each correctly. The FreeBSD base system has no DHCP server, so install the ISC DHCP server from packages and set it up. To enable the DHCP server, write dhcpd_enable="YES" into the rc.conf file, and edit the ISC DHCP server's configuration file, dhcpd.conf. Look two slides back.
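A minimal dhcpd.conf sketch in that spirit; the subnet, the addresses, and the host entry are illustrative assumptions, not the slide's exact contents:

    # /usr/local/etc/dhcpd.conf
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        next-server 192.168.1.1;                           # TFTP server
        filename "/tftpboot/amd64/10.1/pxeboot";           # PXE boot loader
        option root-path "192.168.1.1:/home/pxe-freebsd";  # NFS installer root
    }
    # pin a fixed address to one server module by its MAC address
    host node01 {
        hardware ethernet 74:d4:35:e9:e2:70;
        fixed-address 192.168.1.101;
    }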
The pxeboot kernel is loaded using TFTP. The TFTP server is launched by the inetd daemon, so add the line inetd_enable="YES" into the rc.conf file. Next, edit the inetd.conf file and remove the comment from the TFTP line. The file name pxeboot is defined in dhcpd.conf; the file names must be the same. In this case, the pxeboot file path is /tftpboot/amd64/10.1/pxeboot, and the pxeboot kernel file is under that directory. After the pxeboot kernel is loaded, it tries to load the installer using NFS.
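A sketch of those two edits, using the paths from the talk:

    # /etc/rc.conf
    dhcpd_enable="YES"      # ISC DHCP server from packages
    inetd_enable="YES"      # inetd launches the TFTP server

    # /etc/inetd.conf -- uncomment the tftp line; -s sets the TFTP root
    tftp  dgram  udp  wait  root  /usr/libexec/tftpd  tftpd -l -s /tftpboot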
Add those lines into the rc.conf file and edit the exports file correctly. In this case, /home/pxe-freebsd is the root path of the PXE installer contents. This path is defined in the dhcpd.conf file too; the path names must be the same. Under the /home/pxe-freebsd directory, you need the installer contents. The easiest way to deploy the installer contents is to extract the base.txz included in the installer images. The mdconfig command can make a device file from the installer ISO image file; mount it, and extract the base.txz file into the /home/pxe-freebsd directory.
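As a sketch, assuming the /home/pxe-freebsd directory reconstructed above and a FreeBSD 10.1 amd64 ISO (the file name is an assumption):

    # /etc/rc.conf -- NFS server pieces
    nfs_server_enable="YES"
    rpcbind_enable="YES"
    mountd_enable="YES"

    # /etc/exports -- export the installer root read-only
    /home/pxe-freebsd -ro -maproot=root

    # attach the ISO as a memory disk (assuming it attaches as md0),
    # mount it, and unpack base.txz
    mdconfig -a -t vnode -f FreeBSD-10.1-RELEASE-amd64-disc1.iso
    mount -t cd9660 /dev/md0 /mnt
    tar -xpf /mnt/usr/freebsd-dist/base.txz -C /home/pxe-freebsd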
Next. The next step is important. To enable SOL (Serial over LAN) by default, edit the installer contents before installation. First, add the SOL console configuration into /home/pxe-freebsd/boot/loader.conf. Oh, this path is long. My God. And next is /home/pxe-freebsd/etc/ttys: change the ttyu2 line so the console is turned on.
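A sketch of those two edits; the loader variables are the standard FreeBSD serial-console knobs, and whether ttyu2 and 115200 bps match the DX1000's SOL wiring is an assumption taken from the talk:

    # /home/pxe-freebsd/boot/loader.conf -- route the console over serial
    boot_multicons="YES"
    boot_serial="YES"
    comconsole_speed="115200"
    console="comconsole,vidconsole"

    # /home/pxe-freebsd/etc/ttys -- turn the SOL serial line on for logins
    ttyu2  "/usr/libexec/getty std.115200"  vt100  on  secure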
With the network switch module running, the CMM module running, and the PXE boot environment working well, the next step is installation. Power on the target server modules using ipmitool like this, and connect to the server modules through SOL using ipmitool commands.
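A sketch with ipmitool; the BMC address and the admin/admin credentials are placeholders, so take the real ones from the DX1000 manual:

    # power on a server module through its BMC
    ipmitool -I lanplus -H 192.168.1.101 -U admin -P admin chassis power on
    # attach to its console over Serial over LAN
    ipmitool -I lanplus -H 192.168.1.101 -U admin -P admin sol activate
    # detach later with the ~. escape sequence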
Through SOL, you can manage your server modules like a common PC from your terminal application, like those. When the setup screen appears, type function key F2 to enter the BIOS settings screen. This is the BIOS settings screen of a server module. You can set up the BIOS through ipmitool. Select the boot menu and change the boot sequence order so that PXE boot is the first-priority boot device. Save the configuration and reboot. This is a PXE boot sequence screen.
An IP address is assigned, and it tries to load the pxeboot kernel using TFTP. The pxeboot kernel is loaded. During the boot sequence, you should type a terminal type; vt100 or xterm is an adequate choice. After this screen, the install operation is the same as on a common PC. Well, installing about 100 server modules by hand isn't a reasonable idea.
A custom installer that does the installation work automatically is the reasonable way. Conclusion. The NEC micro modular server DX1000 is suitable for Hadoop clusters, for high computational work using many hosts, or for bundling many physical servers into a rack. Low power consumption and high space efficiency reduce a lot of running costs. In conclusion, you can use FreeBSD on the micro modular server DX1000. However, the installation is a little bit confusing. Understanding the relationships and the structure of the modules helps you to do it. Server modules are straightforward Intel Atom machines; after installation, FreeBSD works well. I have two appendices. The first is about QSFP as a data LAN.
Commonly, people don't have QSFP interfaces. A server module has three NIC ports: two of those are for the data LAN and the other is for the management LAN. A network switch module has three kinds of interfaces too: two of those are QSFP interfaces and the other is RJ45. By default, the data LAN of a server module is connected to the QSFP ports of the network switch module, and the management LAN of a server module is connected to the RJ45 port of the network switch module. In some cases, you want to use the RJ45 interface for both the data LAN and the management LAN. It's possible, but in a little bit confusing way.
After boot, first change the configuration by using ipmitool like those. There are vendor-specific commands to change the network structure. Next, log in to the network switch module through the serial console on the front of the DX1000, and type commands like those. That's it. But this change will be destroyed after the DX1000 reboots. So if you want RJ45 for both the data LAN and the management LAN, please consider using the auto-typing feature of your terminal application. Always by hand? Yes — not reasonable.
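One hedged way to automate that re-typing is an expect(1) script driving the serial console. Everything below is hypothetical — the device node, the prompts, and the placeholder command — because the real switch commands are on the talk's slides:

    #!/usr/bin/env expect
    # reapply network switch settings after a DX1000 reboot (sketch)
    spawn cu -l /dev/cuaU0 -s 115200
    expect "login:"        ;# assumed login prompt
    send "admin\r"
    expect "Password:"
    send "admin\r"
    expect "switch>"
    send "enable\r"        ;# put the real reconfiguration commands here
    expect "switch#"
    send "exit\r"
    send "\r~."            ;# cu escape sequence to disconnect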
Yes, at last: if you want to buy a DX1000, no problem. NEC has contact points in Europe, the US, and Asia Pacific regions. Who wants to buy this device? Hands up. In North America, NEC Corporation of America is the contact point. In Europe, NEC Enterprise Solutions is there. And in Asia Pacific regions, NEC Corporation is the contact point. Yes, my story is done. Any questions? Yeah.
It should be hardware certified for FreeBSD. Other questions? Yes. So, does that switch function as a fully capable switch across all of the server modules? All of the server modules participate in being able to get access to the switch modules? Yes. You can set up various server modules in VLANs or whatever you want? Yes, yes. You can construct any type of network through the network switch module. Yeah? Have you tried to bring up X11 on it? Bring up the what? The UI, Xfce or anything lightweight; did you try to bring up a graphical interface on it? I don't know about the graphical interface.
And this person said that it's CLI only, which we have not seen or know nothing about. Each module has a BMC port, so you can control it through a web browser and the web UI.
Good question. It depends on the sales. Yes, if this product sells big, NEC will develop the next, higher-performance version of this product. But, sales. Single socket, how many cores? One CPU per server module, single socket; that is an eight-core module, eight cores. So, from memory, it's eight cores, and you can link up to 46 of these machines in a high-density environment, on a very small footprint, for a cloud and so on.
About 46 modules? 46 modules, yes, but that is a sales point; that is not a realistic answer. No one can use all 46 modules at the same time because of the limitation of the power unit. So yeah, 40 is the realistic limitation. 40, yeah. And that includes two CMM modules, so 38 server modules per enclosure is the limitation.
Any other questions? Yeah, thank you very much. Thank you.