At ISM, the safety and security of our employees, customers, and local communities are our utmost priority, and our thoughts and prayers are with those impacted by the coronavirus.  As of today we have no employees or family members infected with the virus, and our business operations are normal.

To do our part to stop this outbreak, we have implemented strict guidelines for all products we receive and ship.  All boxes are received by team members wearing sanitary gloves and are disinfected immediately upon receipt.  Our team then disinfects all machines and parts we receive, disinfects them again before shipment, and packs them in new, clean boxes.

This is to ensure that our employees remain protected from the virus and that all parts we ship are clean and virus-free when they leave our technical center.

We have also locked down our facility and are not receiving any visitors, to ensure that it remains virus-free.  Our team members here at ISM are practicing all social distancing and cleanliness guidelines from the Centers for Disease Control and Prevention: https://www.cdc.gov/coronavirus/2019-ncov/community/organizations/businesses-employers.html

 
We have instructed non-essential employees to work from home during this outbreak.  Those team members who are coming in to process your orders are not visiting stores or restaurants; they go straight from their homes to work, then back home again.  To help stop the unprecedented spread of this virus, we will provide our team with any household goods they may need during this outbreak so they do not have to visit stores.
 
FedEx, UPS, and DHL are experiencing shipping delays, and there are ongoing supply chain disruptions.  Our team continues to work diligently to process all orders on time, but please understand that some shipping delays may occur due to the problems these carriers have reported.
We are very thankful for our partnership and friendship with you and your company.  Please let us know if there is anything we can do to help you, your company, or your families during this difficult time.  If household or cleaning supplies are limited in your area, we can source goods locally and ship them to you so you have supplies for your families and friends.  Please let us know how we can help, and we pray you all stay safe during this outbreak.  Together we will get through this, and we look forward to working closely with you in 2020 and for many years beyond!

With the recent announcement of security vulnerabilities found in microprocessors, IBM has released the following statement.  IBM will be releasing patches, but the best and most important security you can apply to your systems is a good firewall; contact us about the offerings we have from Palo Alto Networks.  Here’s the word from IBM today:

“On Wednesday, January 3, researchers from Google announced a security vulnerability impacting all microprocessors, including processors in the IBM POWER family.

This vulnerability doesn’t allow an external unauthorized party to gain access to a machine, but it could allow a party that has access to the system to access unauthorized data.

If this vulnerability poses a risk to your environment, the first line of defense is the firewalls and security tools that most organizations already have in place. Complete mitigation of this vulnerability for Power Systems clients involves installing patches to both system firmware and operating systems. The firmware patch provides partial remediation to this vulnerability and is a pre-requisite for the OS patch to be effective. These will be available as follows:

  • Firmware patches for POWER7+, POWER8 and POWER9 platforms will be available on January 9. We will provide further communication on supported generations prior to POWER7+,  including firmware patches and availability.
  • Linux operating systems patches will start to become available on January 9.  AIX and i operating system patches will start to become available February 12. Information will be available via PSIRT.

Clients should review these patches in the context of their datacenter environment and standard evaluation practices to determine if they should be applied.”

It’s better to avoid a security breach than to hunt for excuses afterwards.

When network security is breached, many companies scramble for answers. Who’s to blame? What did they do? Will it happen again? But by then it’s often too late, and the damage can be devastating.

We’ve seen huge fines levied on public-sector organizations subject to data breaches. For small and medium-sized firms, the problem can even be terminal – over 60% of companies that experience a data loss go out of business within six months.

So find out what’s really happening with your network traffic. Get a free Security Lifecycle Review (SLR). We run these in conjunction with Palo Alto Networks, a Gartner Magic Quadrant leader and next-generation security company that works to protect businesses like yours in the digital age.

CLAIM YOUR FREE SLR
We will give you a FREE report as part of your review that will clearly show you:

  • Which applications are in use, and the potential risks of exposure
  • Specific details on ways adversaries are attempting to breach your network
  • Comparison data for your organisation versus that of your industry peers
  • Actionable intelligence – key areas you can focus on immediately to reduce your risk exposure

Claim your Security Lifecycle Review – no obligation

Palo Alto Networks

With your users expecting full access to the enterprise network on their mobile devices, have you implemented a security plan?

Nathan Wenzler has some great advice here. Talk to one of our experts, who can help you refine and implement solutions from our security partner, Palo Alto Networks:

Organizations with mobile workforces face serious challenges when it comes to their overall cybersecurity posture. As more users leverage laptops, tablets, smartphones and other portable devices, security risks begin to increase in three areas which can be simply categorized as:

  • What users bring into the environment
  • What users take out of the environment
  • An overall increase in scope of what can be attacked

Looking at the risk of “what users bring into the environment”, companies must deal with devices that attach to their corporate networks after having connected to a user’s home network, public Wi-Fi hotspots and any number of other unsecured networks. These systems are likely not as well protected as those governed by enterprise-class endpoint security tools, and thus run a much larger risk of being infected with malware, viruses, ransomware, worms and other malicious programs used by attackers. When a user’s compromised device is connected to a corporate network, it can launch attacks against the other devices on the network, or serve as a point of entry for a cybercriminal, bypassing all perimeter defenses. There are many strategies that can be employed to defend against this sort of problem, including, but not limited to:

  • Set strong policies which require that devices connected to the corporate network have endpoint protection software which is up to date and that systems are fully patched
  • Create wireless networks for users’ non-work systems, which they can use for Internet access and other functions without being connected directly to the internal corporate network
  • Develop Internet-facing services for email, messaging and other basic corporate functions which users can access remotely without need of internal access
  • Assign corporate-owned mobile devices to users, instead of allowing personally-owned devices, which have the same endpoint protection software, access controls and other corporate governance as any other device on the internal network

As for “what users take out of the environment”, keeping classified or critical proprietary data safe is a primary need of any organization, regardless of vertical. Intellectual property theft is a very real problem for almost any organization, even in areas where it may not seem obvious. Take universities and other organizations in academia, where research papers and doctoral theses can generate millions of dollars in revenue from grants, government investment or corporate efforts to license the findings for commercial purposes. Users who have access to this kind of critical data could easily copy it to unsecured mobile devices and transport it out of the protected network, compromising the data and potentially costing the organization large amounts of revenue. To protect against this kind of data loss and theft, organizations must maintain strong access controls around who can access information stored across their network, and adopt least-privilege policies to ensure that only the users who must have access do. For complex access requirements, consider implementing Data Loss Prevention (DLP) solutions, which provide a wide array of logging, tracking and access-control functions that can prevent a user, whether authorized or not, from exfiltrating critical information out of the environment.

Finally, when organizations begin to expand their workforces outside the confines of a well-controlled network housed in physical office locations, the more common, outdated defense strategies become difficult to implement and manage. The notion of a traditional Internet perimeter, where a firewall can block unwanted external traffic, simply disintegrates in today’s cloud-based and hybrid environments, and network admins must now wrestle with huge numbers of mobile devices all over the globe that access corporate resources and connect to public, unsecured networks. This means that the number of devices hackers can attack goes up dramatically, while the ways in which those devices can be protected start to shrink.

It’s imperative that organizations find security solutions that will scale up alongside not only the sheer volume of additional devices being used, but the scope of where and when these devices are used to perform work. Leveraging cloud-based technologies to store data centrally can be one option, provided that sufficient technological controls and legal protections are in place. Additionally, more and more security vendors are providing strong cloud-based solutions which can scale up quickly and easily to identify and protect your devices wherever they are in the world and provide centralized management functionality to your internal IT staff responsible for controlling these assets.

While there are a number of challenges for all organizations as they move to and utilize a more nimble and mobile workforce, with proper planning, strong controls and using scalable cloud-based security technologies, they can reduce their overall risk of loss while dramatically increasing the security posture of the environment as a whole.

http://www.csoonline.com/article/3216469/mobile-security/security-on-the-move-protecting-your-mobile-workforce.html

Good synopsis here from Dennis Crouch on the You Own Devices Act, which would empower you, as the owner of your system, to get value for your investment in the system software when you go to sell to us: http://patentlyo.com/patent/2015/02/devices-yoda-2015.html.  Please contact your local representatives to voice your support for this act.  Here’s Dennis’s write-up:

You Own Devices Act (‘YODA’) of 2015

Reps. Farenthold and Polis today reintroduced the You Own Devices Act (‘YODA’) that I discussed in September 2014.  The provision attempts a statutory end run against end-user license agreements (EULAs) for computer software.  The current and growing market approach is to license rather than sell software.  That approach cuts out the first-sale (exhaustion) doctrine and allows the copyright holder to limit resale of the software by the original purchaser and to impose substantial use restrictions.  That approach is in tension with the common law tradition of refusing to enforce use or transfer restrictions.  However, a number of judges have bought into the idea that the existence of an underlying copyright somehow requires favoring “freedom of contract” over the traditional unreasonable-restraint-of-trade doctrines.

YODA addresses this issue in a limited way – focusing on transfer rights – and would provide someone transferring title to a computer with the right to also transfer an ‘authorized copy’ of software used on the computer (or transfer the right to obtain such copy).  That right would be absolute – and “may not be waived by any agreement.”  Even without the proposed law, courts and the FTC should be doing a better job of policing this behavior that strays far from our usual pro-market orientation. However, the provision would make the result clear cut.

In some ways, I think of this provision as akin to the fixture rules in real property — once personal property (such as a brick) is fixed to the land (by being built into a house), the brick becomes part of the land and can be sold with the land. In the same way, a computer would come with rights to use all (legitimate) software therein.

To be clear, YODA would not allow transfer of pirated software, but would allow transfer in cases where the owner has a legitimate copy but is seemingly subject to a contractual transfer restriction.

 

Farenthold is a Texas Republican and a member of the IP Subcommittee of the Judiciary Committee.  On Twitter, Farenthold quipped: “Luke didn’t have to re-license Anakin’s lightsaber, so why should you?”


About Dennis Crouch

Law Professor at the University of Missouri School of Law

Big Data has been the buzzword for some time, and IBM is hopping on the trend as shops try to get a handle on “Big Data” in their operations.  IBM has announced that its new Power Systems servers, built on the Power8 processor, are the perfect way to handle these needs.  It’s a welcome addition to the product line that is sure to move some organizations to upgrade to the newest technology.  But is this latest machine something your organization needs, or are you better served not biting on the latest offering from IBM, which is trying to capitalize on the Big Data buzz?  Contact us for details on how a Power7 or Power6 processor may be the upgrade you need, at savings of up to 90% off IBM list price.  In the meantime, here’s the news from IBM on the Power8-based offerings:

ARMONK, N.Y. – 23 Apr 2014: IBM (NYSE: IBM) today debuted new Power Systems servers that allow data centers to manage staggering data requirements with unprecedented speed, all built on an open server platform.  In a move that sharply contrasts other chip and server manufacturers’ proprietary business models, IBM, through the OpenPOWER Foundation, released detailed technical specifications for its POWER8 processor, inviting collaborators and competitors alike to innovate on the processor and server platform, providing a catalyst for new innovation.

Built on IBM’s POWER8 technology and designed for an era of Big Data, the new scale-out IBM Power Systems servers culminate a $2.4 billion investment, three-plus years of development and exploit the innovation of hundreds of IBM patents — underscoring IBM’s singular commitment to providing higher-value, open technologies to clients. The systems are built from the ground up to harness Big Data with the new IBM POWER8 processor, a sliver of silicon that measures just one square inch, which is embedded with more than 4 billion microscopic transistors and more than 11 miles of high-speed copper wiring.  

“This is the first truly disruptive advancement in high-end server technology in decades, with radical technology changes and the full support of an open server ecosystem that will seamlessly lead our clients into this world of massive data volumes and complexity,” said Tom Rosamilia, Senior Vice President, IBM Systems and Technology Group. “There no longer is a one-size-fits-all approach to scale out a data center. With our membership in the OpenPOWER Foundation, IBM’s POWER8 processor will become a catalyst for emerging applications and an open innovation platform.”

You can read the rest here:  http://www-03.ibm.com/press/us/en/pressrelease/43702.wss

Till next time!

Interesting news here out of IBM: they have licensed a Chinese manufacturer to make its own version of the forthcoming IBM Power8 chip.  This is a notable development for the market, and it will be interesting to see how it changes the IBM Power landscape: http://www.enterprisetech.com/2014/01/21/chinese-startup-make-power8-server-chips/

IBM has added another member to its OpenPower Consortium, which seeks to expand the use of Power processors in commercial systems. The Chinese government has made no secret that it wants to have an indigenous chip design and manufacturing business, and the newly formed Suzhou PowerCore aims to be one of the players in the fledgling Chinese chip market – and one that specializes in Power chips.

The details of the licensing agreement between the OpenPower Consortium, which is controlled by IBM at the moment, and Suzhou PowerCore are still being hammered out.

The OpenPower Consortium was founded in August last year with the idea of opening up Power chip technology much as ARM Holdings does for its ARM chip designs. Thus far, search engine giant Google, graphics chip maker Nvidia, networking and switch chip maker Mellanox Technologies, and motherboard maker Tyan have joined the effort. In December, the consortium just got its bylaws and governance rules together and had its first membership meeting.

Brad McCredie, vice president of Power Systems development within IBM’s Systems and Technology Group, tells EnterpriseTech that the arrangement with the OpenPower Consortium gives Suzhou PowerCore a license to the forthcoming Power8 processor, and will allow the startup to tweak the design as it sees fit for its customers as well as get the chips made in other foundries as it sees fit.

Initially, Suzhou PowerCore will make modest changes to the Power8 chip and will use IBM’s chip plant in East Fishkill, New York to manufacture its own variants. The timeline for such modifications is unclear, but McCredie said that, generally speaking, it can take two years or more to design a chip and get it coming off the production line. Presumably it will not take that long for Suzhou PowerCore to get its first iteration of Power8 out the door, particularly given that IBM will have worked the kinks out of its 22 nanometer process as it rolls out its own Power8 chips sometime around the middle of the year. The chip development teams of Suzhou PowerCore and IBM are working on the timelines and roadmaps for the Chinese Power chip right now.

The Chinese Academy of Sciences has six different chip projects that it has helped cultivate in the past decade in the country. The “Loongson” variant of the MIPS processor, which is aimed at servers and high performance computing clusters, is one chip China is working on, as is a clone of the OpenSparc processor that was open sourced by Sun Microsystems before it was acquired by Oracle. This latter chip, named “FeiTeng,” has been used as adjunct processors in service nodes in the Tianhe-1A massively parallel supercomputer. The Loongson chips are in their third generation of development and are expected to appear in servers sometime this year.

So why would China be interested in a Power chip? “China is very large, and it has the resources to place more than one bet,” explains McCredie. “In the conversations that we are having with them, it is clearly much more pointed at commercial uses, whereas the activity we have seen thus far is much more pointed at scientific computing. This is going after big data, analytics, and large Web 2.0 datacenters.”

The initial target markets for these PowerCore processors are in banking, communications, retail, and transportation – markets where IBM has made good money selling its own Power Systems machines for the past several years. Suzhou PowerCore expects to see its Power variants in server, storage, and networking gear eventually.

Suzhou PowerCore is putting together the first chip development team that is working in conjunction with the OpenPower Consortium. It will probably not be the last such team if IBM’s licensing terms are flexible and affordable. Suzhou PowerCore is backed by Jiangsu Province and is located in the Suzhou National New & Hi-Tech Industrial Development Zone, about 30 miles west of Shanghai. The Research Institute of Jiangsu Industrial Technology has been given the task of building an ecosystem dedicated to Power software and hardware across China.

Incidentally, Suzhou PowerCore is a sister company to China Core Technology, or C*Core for short, which is a licensee of the Freescale Semiconductor M-Core and IBM PowerPC instruction sets. C*Core licensed the PowerPC instruction set from IBM in 2010, and its C8000 and C9000 chips are aimed at the same embedded markets as the ARM Cortex-A8 and Cortex-A9 designs. As of the end of 2012, C*Core had more than 40 different system-on-chip designs and had shipped more than 70 million chips for a variety of embedded applications, including digital TVs, communication gear, and auto systems.

The Power8 chip from IBM is expected sometime around the middle of this year. It has twelve cores and eight threads per core, which is 50 percent more cores than the current Power7+ chip and twice as many threads per core. Running at 4 GHz, the Power8 chip is expected to deliver roughly 2.5 times the performance of a Power7+ chip on a wide variety of commercial and technical workloads. The Power8 chip has 96 MB of L3 cache on the die and 128 MB of L4 cache implemented on its memory buffer controllers, and has 230 GB/sec of sustained memory bandwidth and 48 GB/sec of peak I/O bandwidth, which is more than twice that offered by the Power7+ chip.
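The core and thread ratios quoted above can be sanity-checked in a few lines. This is a minimal sketch; the Power7+ figures are derived from the stated ratios rather than quoted directly, and the variable names are ours:

```python
# Power8 figures from the article; Power7+ figures implied by the stated ratios.
p8_cores, p8_threads_per_core = 12, 8

p7_cores = p8_cores / 1.5                      # "50 percent more cores" -> Power7+ had 8
p7_threads_per_core = p8_threads_per_core / 2  # "twice as many threads per core" -> 4

print(int(p7_cores), int(p7_threads_per_core))   # 8 4
print(p8_cores * p8_threads_per_core)            # 96 hardware threads per Power8 chip
print(int(p7_cores * p7_threads_per_core))       # 32 hardware threads per Power7+ chip
```

In aggregate, then, each Power8 chip presents three times as many hardware threads to the operating system as a Power7+ chip.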

It will be interesting to see what tweaks Suzhou PowerCore makes to this beast.

Not sure we are either, but Gartner has an interesting view: with the right practices in place today, organizations can build a data center to meet business needs indefinitely.  Some food for thought here from Gartner:

With the Right Practices in Place, a Data Center Built Today Could Meet Business Needs Indefinitely

STAMFORD, Conn., October 23, 2013 –

The increasing business demands on IT mean that data center managers must plan to increase their organization’s computing and storage capacity at a considerable rate in the coming years, according to Gartner, Inc. Organizations that plan well can adjust to rapid growth in computing capacity without requiring more data center floor space, cooling or power, and can realize a substantial competitive advantage over their rivals.

“The first mistake many data center managers make is to base their estimates on what they already have, extrapolating out future space needs according to historical growth patterns,” said David Cappuccio, research vice president at Gartner. “This seemingly logical approach is based on two flawed assumptions: that the existing floor space is already being used properly and usable space is purely horizontal.”

To ensure maximum efficiency, data center growth and capacity should be viewed in terms of computing capacity per square foot, or per kilowatt, rather than a simple measure of floor space. A fairly typical small data center of 40 server racks at 60 percent capacity, housing 520 physical servers and growing in computing capacity at 15 percent each year, would require four times as much floor space in 10 years.

“With conventional thinking and the fear of hot spots at the fore, these 40 racks, or 1,200 square feet of floor space, become nearly 5,000 square feet in just 10 years, with associated costs,” said Mr. Cappuccio. “A data center manager who rethinks his organization’s floor plans, cooling and server refreshes can house the increased computing capacity in the original floor space, and help meet growing business needs indefinitely. We will witness small data center environments with significant computing growth rates maintaining exactly the same footprint for the next 15 to 20 years.”
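The compounding behind those numbers is easy to verify. A minimal sketch using the figures quoted above (40 racks ≈ 1,200 square feet, 15 percent annual growth over 10 years):

```python
sq_ft = 1200      # 40 racks is roughly 1,200 sq ft, per the quoted figures
growth = 0.15     # 15 percent annual growth in computing capacity
years = 10

# Naive planning assumption the article critiques: floor space scales
# one-for-one with computing capacity.
factor = (1 + growth) ** years
print(round(factor, 2))        # ~4.05 -> "four times as much floor space"
print(round(sq_ft * factor))   # ~4855 -> "nearly 5,000 square feet"
```

Gartner's point is that only the 4x capacity growth is inevitable; with denser racks and better cooling, the 4x floor-space growth is not.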

In this scenario, Gartner recommends upgrading the existing server base to thinner 1U (one unit) height servers or even skinless servers, while increasing rack capacity to 90 percent on average by using innovative floor-size designs and modern cooling methods, such as rear-door heat exchanger (RDHx) cooling, to mitigate concerns over hot spots. Implementing an RDHx system can also reduce the overall power consumption of a data center by more than 40 percent, since high volumes of forced air are no longer required to cool the equipment.

“An initial investment in planning time and technology refresh can pay huge dividends in the mid-to-long term for businesses anticipating a continuous growth in computing capacity needs,” said Mr. Cappuccio.

The evolution of cloud computing adoption will also provide relief for growing data center requirements, and as the technology becomes more established, an increasing proportion of data center functions will migrate to specialist or hybrid cloud providers. This further increases the likelihood of an organization making use of the same data center space in the future, generating significant cost savings and competitive business advantages.

Gothenburg, Sweden – October 1, 2013: According to a new research report from the analyst firm Berg Insight, the global number of mobile network connections used for wireless machine-to-machine (M2M) communication will increase by 22 percent in 2013 to reach 164.5 million. East Asia, Western Europe and North America are the main regional markets, accounting for around 75 percent of the installed base. In the next five years, the global number of wireless M2M connections is forecasted to grow at a compound annual growth rate (CAGR) of 24.4 percent to reach 489.2 million in 2018.
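The forecast's growth figures are internally consistent, as a quick check of the compound annual growth rate from the quoted endpoints shows (connection counts in millions):

```python
# 2013 base and 2018 forecast from the Berg Insight report, in millions.
start, end, years = 164.5, 489.2, 5

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~24.4%, matching the reported CAGR
```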

The report highlights the connected enterprise and big data analytics as two of the main trends that will shape the global wireless M2M industry in 2014. “The world’s best managed corporations across all industries are in the process of mastering how connectivity can help improving the efficiency of their daily operations and the customer experience”, said Tobias Ryberg, Senior Analyst, Berg Insight. “Some of the best examples are found in the automotive industry where leading global car brands now offer a wide selection of connected applications, ranging from remote diagnostics, safety and security to LTE-powered infotainment services such as streaming music.”

Berg Insight believes that the next step in the evolution of the wireless M2M market will be an increasing focus on data analytics. “M2M applications generate enormous quantities of data about things such as vehicles, machinery or other forms of equipment and behaviours such as driving style, energy consumption or device utilisation. Big data technology enables near real-time analysis of these data sets to reveal relationships, dependencies and perform predictions of outcomes and behaviours. The right data analytics tools and the expertise on how to use them can create massive value for businesses”, said Mr. Ryberg. “Over the next 12-18 months we expect to see a series of announcements of new partnerships between mobile operators and big data technology leaders to address the vast business opportunities in this space.”

Download report brochure: The Global Wireless M2M Market
