Is the next President of the United States running Linux?

Feedback.pdxradio.com message board: Archives: Politics & other archives: 2007: April - June 2007: Is the next President of the United States running Linux?
Author: Andrew2
Thursday, June 28, 2007 - 10:55 am

Very interesting article:

http://www.douglaskarr.com/2007/06/23/2008-elections-by-server/

Of the Dems, only Hillary is using a Microsoft server. Most of the Republicans are.

Obama is using Pair.com, my former web host, which runs FreeBSD (like Linux it's free and open source, but it's developed quite differently - by a central core team rather than Linux's loose network of contributors). I use FreeBSD myself on my current server. Maybe I'll vote for Obama now!

Andrew

Author: Nwokie
Thursday, June 28, 2007 - 10:57 am

That's because Linux is a toy; it's not a stable operating system. Linux is only good for mail servers and things like that.

Author: Darktemper
Thursday, June 28, 2007 - 11:23 am

Poor ol' George W, he's still using DOS 6.2 and Windows 3.1! He'd be lost without Packard Bell's Navigator... Boy Howdy!

Author: Missing_kskd
Thursday, June 28, 2007 - 11:38 am

Dude, you have no clue. Seriously.

Where core computing is concerned, Unix is the best there has ever been, period. Linux, a Unix-like system, is solid these days. It scales from small dedicated embedded applications all the way up to 1024-CPU NUMA machines (SGI).

BTW: a NUMA machine is one where there is exactly one copy of the operating system running across N CPUs. One OS image: you log in, ask for a resource count, and the machine reports back 1024 processors, 20 TB of RAM, enormous disk systems, thousands of users, etc... To date, nothing even close has been done on a Microsoft OS. That's Linux, and it's been playing there for years.

You know, real computers, not freaking toys!

I'm not gonna write anything else. It's one of those, if you don't know, then there is no help kind of things.

Sorry.

Andrew, good read! Thanks for sharing. It's interesting to see who is using what. I think that does say something important about where the candidates stand on tech issues. Having some aspect of their computing be open likely means some representation, or at least mind share, where potential legislation is concerned.

Author: Andrew2
Thursday, June 28, 2007 - 11:48 am

Well, if Linux is a "toy" it's sure got a lot of smart people fooled. Billions of dollars of ecommerce revenue are being generated every year...on Linux-based servers. Intel Corporation uses hundreds of Linux machines to simulate its new microprocessor designs. IBM and Oracle sell it to their customers.

If you are talking about a desktop computer, then I'd suggest you look at Ubuntu Linux, which Dell is now offering on some of its computers. I've got Ubuntu on my Dell laptop (dual-boot with Windows XP). Ubuntu installs with Firefox and Open Office by default. It's extremely easy to use and very stable. Windows XP is pretty stable too but certainly crashes on me every once in a while. I don't think many people associate "Windows" and "stable" together.

By the way, my FreeBSD web server (from which tens of thousands of people have viewed my photographs in the last few years) has been running for 616 consecutive days, without being rebooted.

Andrew

Author: Nwokie
Thursday, June 28, 2007 - 2:55 pm

I have several copies of Ubuntu in our engineering dept; they give me far more problems than the various Microsoft products. Every time updates are installed, I have serious problems with the interfaces into the ERP system.

Neither SAP nor Baan supports Linux desktops, and for a major corporation that is a deal killer.

Author: Andrew2
Thursday, June 28, 2007 - 3:11 pm

I'm not sure which versions of Ubuntu you have in your engineering dept, but the latest, 7.04 "Feisty Fawn," is quite good. Still, for engineering work, you might want to follow the lead of other companies like Intel and get professionally supported versions of Linux like SUSE or Red Hat. Fortunately, you have a wide range of choices for Linux distros and support levels.

Sure, some software does not run on Linux, but that's not Linux's fault - blame the vendors who choose not to support their software on a Linux platform. That doesn't make Linux a "toy."

Andrew

Author: Nwokie
Thursday, June 28, 2007 - 4:03 pm

I use SUSE on my mail server, FTP server and a couple of other servers.

The eng dept has come up with a proposal to use PXE-boot (diskless) workstations (40 of them) for manufacturing, and their specs require Ubuntu. I came very close to quitting, until the CEO gave me permission to modify the specs however I saw fit.

The problem is, they are only using them to run Firefox, to bring up a web site that has the procedures for building various parts. I have put together 5 machines, all identical, and they all display the site differently.

And the diskless ones are very slow, even across gigabit links.

Author: Andrew2
Thursday, June 28, 2007 - 4:12 pm

I don't know how you built the machines or why they are slow, but it doesn't necessarily sound like an Ubuntu problem. I don't have any experience running diskless workstations, but I can well imagine they might be slow. There may be ways to cache and tune them so they don't saturate the network connections or something, I don't know.

Linux support is different from Windows support, but there are often wikis describing how to deploy solutions like diskless workstations so that they perform well. There are ways to do it poorly, with Windows or Linux.

I'd probably consider something besides a diskless solution. How about something based on a CF (compact flash) card, if you don't want the reliability issues of a hard disk? I just ordered some CF to IDE adaptors from a company in Hong Kong for $2 each, and I can get 2GB flash cards for as little as $15 each. I'm definitely going to deploy one for my home firewall, so it doesn't have to rely on a hard disk.

Andrew

Author: Nwokie
Thursday, June 28, 2007 - 8:13 pm

Money isn't really part of it; they gave me 150 thousand for the project.

The problem is, they want to use NFS to run the applications; the OS is loaded into memory.

After vacation, I'm going to rebuild the server with multiple NFS areas to support the machines, and probably build 2 to 4 servers.

Point is, for about 40 thousand I could have had this up and running using MS as the operating system for the clients (the server I don't care about). Linux is great for low-end servers; it's using it for clients that I have problems with.

Author: Missing_kskd
Thursday, June 28, 2007 - 8:31 pm

So why not just use a beefy server and have the clients either run VNC desktops, or use the X Window System to display the browser on screen? This is a Unixey thing.

Client machines then do almost nothing but display data from the network. Browsers run on the server. If practical, the data also lives on the server, shared via NFS / Samba for updates, etc...

For most content, the X Window System will likely be the better deal.

Does it only run the browser? If so, users need do nothing but either enter an address, or navigate from a home page coded into the logon script that starts the browser.

If so, configure the client to boot its kernel from flash, then launch a bare X session, then rsh, ssh, whatever your poison, the browser full screen from the server.

No log in, nothing. Turn it on, browser appears.
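A minimal sketch of what that client startup could look like. The hostname `appserver` and the shared `kiosk` account are purely illustrative, and the `run` wrapper prints each command instead of executing it, so this is a readable checklist rather than a deployable script:

```shell
#!/bin/sh
# Hypothetical "turn it on, browser appears" startup for a thin client.
# SERVER and KIOSK_USER are invented names, not anything from this thread.
SERVER=appserver
KIOSK_USER=kiosk

run() { echo "+ $*"; }   # print commands instead of executing them;
                         # replace with real execution when deploying

run X :0 -nolisten tcp                        # bare X server, no window manager
run ssh -X -C "$KIOSK_USER@$SERVER" firefox   # browser runs on the server,
                                              # displays here via X forwarding
```

In real use the ssh invocation would key off a passphrase-less key pair for the kiosk account, so no one ever types a password.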

A 64-bit Linux server will hold more than enough RAM. Depending on concurrent users, you might need more than a dual-CPU box; that's an option too. For 50 browser users, though, I do not think 64-bit will even be needed.

Clients need nothing but flash, some RAM (maybe 1/2 GB), no disk, and a nice 2D graphics card. (Matrox is excellent for this kind of thing.)

Keep the flash image on the server, and write a small script to push updates out to the clients on demand, etc...

Clients can use one global user logon, or multiple ones, depending on the structure of the data. Permissions, access, etc... Take a look at XDMCP.

Create all user accounts on the server, with the environment rewritten for remote display. The logon script for each account captures the client IP, sets the X display automatically, and starts the browser; from there all keyboard / mouse and screen I/O runs over the network.
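That logon-script trick can be sketched as follows, assuming (as is typical for sshd) that the client's address shows up in `$SSH_CLIENT`; the fallback address is invented so the sketch runs standalone:

```shell
#!/bin/sh
# Sketch of a server-side logon snippet (e.g. at the end of the kiosk
# account's ~/.profile). sshd sets SSH_CLIENT to "clientip port port";
# the fallback value below exists only so this demo runs by itself.
SSH_CLIENT="${SSH_CLIENT:-192.168.1.50 4022 22}"

CLIENT_IP=${SSH_CLIENT%% *}    # first space-separated field = client's IP
DISPLAY="$CLIENT_IP:0.0"       # X display 0, screen 0 on that client
export DISPLAY

echo "DISPLAY=$DISPLAY"        # a real logon script would now exec firefox
```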

Server needs 1 or more 1000/T network interfaces. Clients should use 100/T, barring large content.

Use switches to regulate network traffic flow, to avoid difficult load balancing issues.

DHCP for clients, unless number of users is low.

Server has a public IP, suitable for engineering content updates, administration, etc... It also has private IPs (maybe more than one) to serve pools of users. 20 users per 1000/T interface on the server is a good rule of thumb; browser content may impact this. So: two interfaces for shop floor users, two subnets, a coupla switches, and the load and latency issues are taken care of automatically.
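On the boot side, a hedged sketch of what the DHCP piece might look like with ISC dhcpd; every address, range, and filename here is a placeholder, not anything from this thread:

```
# Hypothetical /etc/dhcpd.conf fragment for the PXE clients
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.50 192.168.1.90;   # pool for the shop-floor clients
    next-server 192.168.1.1;           # TFTP server holding the boot files
    filename "pxelinux.0";             # PXELINUX loader fetched over TFTP
}
```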

Adding new clients is a simple matter of assembling the hardware, flash drive, etc... Connect cables, power on, and let it go.

All admin then happens from server, handled remotely via SSH / hardware VPN. Remote client test is possible, but slow, via same X window system.

One advantage of this type of configuration is that users have no direct access to the data! Another is that any scripting, HTML, and general systems admin, logging, etc... all live on the server. Make one change, and all clients see it.

Client admin consists of mounting the flash image as a loopback file system and making changes as if it were a local disk. Use a script to push the image out to the flash cards on the clients.
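That mount-and-push loop might look like the sketch below. The paths, device name, and client list are assumptions, and the `run` wrapper prints each command instead of executing it (loopback mounts and raw-device writes need root):

```shell
#!/bin/sh
# Sketch of "edit the master flash image, push it to every client".
# IMG, MNT, CLIENTS, and /dev/hda are all illustrative values.
IMG=/srv/clients/flash.img
MNT=/mnt/flash
CLIENTS="192.168.1.51 192.168.1.52"

run() { echo "+ $*"; }   # print instead of execute; drop when deploying

run mount -o loop "$IMG" "$MNT"   # edit the image like a local disk
run umount "$MNT"                 # ... after making changes under $MNT

for c in $CLIENTS; do
    # stream the updated image onto each client's CF card over ssh
    run "dd if=$IMG | ssh root@$c 'dd of=/dev/hda'"
done
```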

All clients completely portable.

Once client issues are worked out, they require zero Linux updates. They are display and input terminals only.

If multiple applications are necessary, launch them from the browser home page displayed at user logon, or launch a minimal window manager (again from the server) to display a few icons, no menus, and handle window-focus issues.

Depending on your security mindset, such a system could be closed, meaning so long as the content is appropriate for the deployed open software stack, updates are likely not indicated for server either.

Author: Nwokie
Friday, June 29, 2007 - 10:21 am

I know of lots of ways to give them a browser; heck, I could have bought Xboxes. But the specs that I have to follow say diskless workstations, using Ubuntu, booting through PXE, with apps running on an NFS drive.

I could load Ubuntu on flash drives, CD's etc.

I currently have 5 running to spec, 5 with hard drives, and 2 with flash drives; the hard-drive and flash versions work acceptably. The diskless versions work OK if only 2 are running the app; once I hit 3, performance degrades.

Author: Missing_kskd
Friday, June 29, 2007 - 10:56 am

Understood.

So, why are they controlling the specs?

I understand calling out Linux, browser content, diskless operation, etc...

Those are core things that impact the project and can be tied to business decisions.

What I don't understand are the limitations on implementation details. Really, these goals should be abstracted somewhat to permit an implementation that actually leverages the technologies being specified.

Given what you posted, some changes could still yield a solid plan.

So, the machines remain diskless. During startup / shutdown network I/O is a significant load. How then to address that?

Are the machines to be powered up and down daily, or can they remain booted for an extended period of time?

If allowed to remain booted, this makes startup / shutdown a very minor issue. If not, some discussion about network capacity and expectations is in order.

Running the applications over NFS is a clear attempt to centralize administration, but does not properly leverage both the X window system and the multi-user aspect of UNIX, and this is where I believe the core troubles lie.

A server equipped with a nice amount of RAM is gonna manage all 50 browser users, period. Let's just say it's a 32-bit machine with 4 GB of RAM and enough disk to handle the content and boot images for the clients with no hassles.

From there, instead of running the applications on the clients, run them directly on the server. UNIX servers are capable of serving applications from a compute standpoint, not only a file sharing standpoint.

What's going to happen in this scenario is that latency due to network traffic is considerably reduced, while the goal of central administration - which is what engineering is after - is still achieved.

If the content can exist on the server as well, and be shared out via one of the network interfaces (the plan should involve three total: 1 for corporate access, 2 for client machine access), then the only traffic on the network would be booting clients, display and user input, and perhaps printing.

If one factors out the boot traffic, the X window traffic is sporadic and likely to be a non-issue given 20 or so users per network interface. I'm willing to bet - having done this exact scenario with high-end modeling software served from a server - that the number can be pushed a bit for browsers only.

IMHO, having this discussion, combined with a coupla benchmarks will put this matter to rest. Application data serving, via NFS, brings with it significant latency issues, particularly when both the application and data must travel the network, being served from different machines in the worst case.

In Linux, the XDMCP protocol is easily configured to allow the server to present logons for any number of users, all maintained on the server machine.
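As a hedged example of how little configuration that takes: on a gdm-based distro of this era, the server side amounts to one stanza in gdm's config file (the exact path varies by distro):

```
# e.g. /etc/X11/gdm/gdm.conf - let gdm answer XDMCP queries
[xdmcp]
Enable=true
```

A client started with `X -query servername` then gets the server's graphical logon on its local display.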

Additionally, choice of window manager can significantly impact this as well. Running a simple window manager, instead of KDE / GNOME, can both limit the user access to programs not the focus of the project, and keep disk boot images and startup processes to a minimum for a client.

(continued)

Author: Missing_kskd
Friday, June 29, 2007 - 11:14 am

The last time I setup a system like this, there was considerable confusion about what was running where and why.

I strongly encourage you to consider the following benchmark for performance.

Configure a few user accounts on the server machine. Log into the clients and request an SSH session with X forwarding turned on. On the client machine, open a terminal window and enter xhost + to disable the X window access control, then enter the SSH command for testing.

(for implementation, you can tighten this back up if it's really a big deal; for now, it only needs to be off.)

ssh -X -C username@servername

This means: forward X window display requests to the client machine, compress the display data, and log in as a particular user (ssh takes the user as username@host, or via -l username).

Do this for all 5 clients, such that you have terminal windows, running on the server displayed on each one.

Test the display forwarding with xcalc. If the little calculator appears, close it and know everything is gonna work.

If the little calculator does not appear, the error message will tell you why, and you can set the DISPLAY environment variable to the IP address of the client.

(For implementation this can be handled in the user logon script. It captures the client IP and sets the environment, no biggie.)

export DISPLAY=[clientip]:0.0      (sh/bash; in csh it's: setenv DISPLAY [clientip]:0.0)

That is the command you will need to establish the right display variable.

Tinker with this until you get the little calculator to display on screen.

Then launch the browser (typically this can be done by typing firefox, but it may vary...).
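The benchmark steps above, collected into one place as a printed checklist; `appserver` and `testuser` are placeholder names, and the script only prints the steps so nothing runs by accident:

```shell
#!/bin/sh
# The X-forwarding benchmark, as a paste-ready checklist per client.
# SERVER and TESTUSER are illustrative names, not from this thread.
SERVER=appserver
TESTUSER=testuser

cat <<EOF
xhost +                      # on the client: disable X access control
ssh -X -C $TESTUSER@$SERVER  # log in to the server with X forwarding
xcalc                        # smoke test: should pop up on the client
firefox                      # the real workload, also running server-side
EOF
```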

Then browse some of the heavier applications with the clients. You should experience far fewer latency and general speed issues. Overall performance should be moderate for the first few launches, then settle to a steady state for the remaining ones, until RAM becomes an issue on the server. top will tell you when this happens, and choices can be made accordingly.

On the server, run top to take a look at memory consumption. What should happen is that the first few browser sessions consume some system RAM. After that, the server caches the code in its RAM and additional user sessions just run it.

(and that's the key right there.)

If you want, launch multiple browsers from one client, and/or log in with multiple user accounts via SSH, to get your test up to a respectable number of potential browsers, all displaying content.

As long as you've given the xhost + command, you are completely free to open up terminal windows and access the server with as many user accounts as you want, displaying as many applications as you want.


Essentially, a UNIX display is a multi-user thing. Applications are free to run anywhere, display anywhere, accept input from anywhere and consume / produce data from anywhere.

For this use case, unless there are other applications in the mix with a heavy RAM footprint, having the server run the browsers and access the content is a far more potent solution than having the clients do it from shared application directories via NFS.

Author: Nwokie
Friday, June 29, 2007 - 12:11 pm

I think the problem is running an app over NFS. I'm going to try to build the browser into the kernel image, which runs from local memory.

My problem is, I don't really have the time to spend on a project that should be done in about 4 hours.
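For what it's worth, "building the browser into the kernel" in practice means packing it into the initramfs image that PXE hands to the kernel, since an application can't be linked into the kernel proper. A runnable miniature of that repack step, with a stub file standing in for the browser tree and all paths invented for illustration:

```shell
#!/bin/sh
# Miniature of rebuilding an initramfs: drop extra files into the
# unpacked root, then repack it as a gzipped newc-format cpio archive,
# which is the format the kernel expects for an initrd/initramfs.
set -e
ROOT=/tmp/initramfs-root            # unpacked image root (illustrative)
mkdir -p "$ROOT/usr/bin"

echo '#!/bin/sh' > "$ROOT/usr/bin/browser"   # stub standing in for firefox
chmod +x "$ROOT/usr/bin/browser"

# Repack; in a real PXE setup this file is what TFTP serves as the initrd.
( cd "$ROOT" && find . | cpio -o -H newc 2>/dev/null | gzip ) \
    > /tmp/initrd.img

gzip -t /tmp/initrd.img && echo "initrd.img OK"
```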

Author: Missing_kskd
Friday, June 29, 2007 - 6:46 pm
Top of pageBottom of page Link to this message

View profile or send e-mail Edit this post

I agree.

How did the kernel trick work?

Author: Nwokie
Friday, June 29, 2007 - 6:50 pm

Haven't got to it yet, I am on VACATION!

