A USB device makes it easy to steal credentials from locked PCs

Most users lock their computer screens when they temporarily step away from them. While this seems like a good security measure, it isn’t good enough, a researcher demonstrated this week.

Rob Fuller, principal security engineer at R5 Industries, found out that all it takes to copy an OS account password hash from a locked Windows computer is to plug in a special USB device for a few seconds. The hash can later be cracked or used directly in some network attacks.

For his attack, Fuller used a flash-drive-size computer called USB Armory that costs $155, but the same attack can be pulled off with cheaper devices, like the Hak5 LAN Turtle, which costs $50.

The device needs to masquerade as a USB-to-Ethernet LAN adapter in such a way that it becomes the primary network interface on the target computer. This shouldn’t be difficult because: 1) operating systems automatically start installing newly connected USB devices, including Ethernet cards, even when they are in a locked state and 2) they automatically configure wired or fast Ethernet cards as the default gateways.

For example, if an attacker plugs in a rogue USB-to-Gigabit-Ethernet adapter into a locked Windows laptop that normally uses a wireless connection, the adapter will get installed and will become the preferred network interface.

Furthermore, when a new network card gets installed, the OS configures it to automatically detect the network settings through the Dynamic Host Configuration Protocol (DHCP). This means that an attacker can have a rogue computer at the other end of the Ethernet cable that acts as a DHCP server. USB Armory is a computer on a stick that’s powered via USB and can run Linux, so no separate machine is required.

Once an attacker controls a target computer’s network settings via DHCP, he also controls DNS (Domain Name System) responses, can configure a rogue internet proxy through the WPAD (Web Proxy Autodiscovery) protocol and more. He essentially gains a privileged man-in-the-middle position that can be used to intercept and tamper with the computer’s network traffic.
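To make the mechanism concrete, here is a minimal sketch of the kind of rogue DHCP configuration such a device could serve, written for dnsmasq, a common lightweight DHCP/DNS server on Linux. The interface name and addresses are illustrative assumptions, not values taken from Fuller’s tool:

```
# /etc/dnsmasq.conf on the attack device (interface and addresses assumed)
interface=usb0
dhcp-range=169.254.100.10,169.254.100.50,255.255.255.0,10m
dhcp-option=3,169.254.100.1                        # default gateway -> attack device
dhcp-option=6,169.254.100.1                        # DNS server     -> attack device
dhcp-option=252,"http://169.254.100.1/wpad.dat"    # WPAD proxy auto-configuration URL
```

Option 3 sets the default gateway, option 6 the DNS server, and option 252 is the conventional DHCP option for advertising a WPAD proxy auto-configuration file, which together give the device the man-in-the-middle position described above.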

According to Fuller, computers in a locked state still generate network traffic, allowing for the account name and hashed password to be extracted. The time it takes for a rogue USB device to capture credentials from a system using this attack is around 13 seconds, he said.

He tested the attack successfully on Windows and OS X. However, he’s still working on confirming whether OS X is vulnerable by default or whether it was his Mac’s particular configuration that was vulnerable.

“First off, this is dead simple and shouldn’t work, but it does,” the researcher said in a blog post. “Also, there is no possible way that I’m the first one who has identified this, but here it is.”

Depending on the Windows version installed on the computer and its configuration, the password hashes will be in NT LAN Manager (NTLM) version 2 or NTLMv1 format. NTLMv2 hashes are harder to crack, but not impossible, especially if the password is not very complex and the attacker has access to a powerful password cracking rig.
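The reason captured hashes are so valuable is that the cracking happens offline, with no further contact with the victim. The sketch below shows the shape of an NTLMv2-style dictionary attack: recompute the HMAC-MD5 challenge-response for each candidate password and compare against the captured value. Note one loud assumption: real NTLM derives the NT hash with MD4 over the UTF-16LE password, but MD4 is frequently unavailable in modern OpenSSL builds, so this sketch substitutes MD5 as a stand-in; the attack structure is unchanged. All names and captured values are illustrative.

```python
import hashlib
import hmac

def nt_hash(password: str) -> bytes:
    # Real NTLM uses MD4 over the UTF-16LE password; MD4 is often
    # disabled in modern OpenSSL builds, so MD5 stands in here.
    return hashlib.md5(password.encode("utf-16-le")).digest()

def ntlmv2_response(password: str, user: str, domain: str,
                    server_challenge: bytes, blob: bytes) -> bytes:
    # NTLMv2 key: HMAC-MD5 of uppercased username + domain, keyed by the NT hash.
    v2_key = hmac.new(nt_hash(password),
                      (user.upper() + domain).encode("utf-16-le"),
                      "md5").digest()
    # Response: HMAC-MD5 over the server challenge and client blob.
    return hmac.new(v2_key, server_challenge + blob, "md5").digest()

# Values an attacker would capture from sniffed traffic (illustrative):
challenge, blob = b"\x01" * 8, b"\x02" * 28
captured = ntlmv2_response("hunter2", "alice", "CORP", challenge, blob)

# Offline dictionary attack: try candidates until one reproduces the response.
for guess in ["letmein", "password1", "hunter2"]:
    if ntlmv2_response(guess, "alice", "CORP", challenge, blob) == captured:
        print("cracked:", guess)
```

This is why password complexity still matters even against “uncrackable” NTLMv2: the attacker’s only limit is how many guesses per second the cracking rig can compute.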

There are also some relay attacks against network services where NTLM hashes can be used directly without having to know the user’s plaintext password.

Computer Simulations Heat Up Hunt for Planet Nine

For a planet that hasn’t technically been discovered yet, Planet Nine is generating a lot of buzz. Astronomers have not actually found a new planet orbiting the sun, but some remote icy bodies are dropping tantalizing clues about a giant orb lurking in the fringes of the solar system.

Six hunks of ice in the debris field beyond Neptune travel on orbits that are aligned with one another, Caltech planetary scientists Konstantin Batygin and Mike Brown report (SN Online: 1/20/16). Gravitational tugs from the known planets should have twisted the orbits around by now. But computer simulations suggest the continuing alignment could be explained by the effects from a planet roughly 10 times as massive as Earth that comes no closer to the sun than about 30 billion kilometers — 200 times the distance between the sun and Earth. The results appear in the February Astronomical Journal.

Evidence for a stealth planet is scant, and finding such a world will be tough. Discovering hordes of other icy nuggets on overlapping orbits could make a stronger case for the planet and even help point to where it is on the sky. Until then, researchers are intrigued by a potential new member of the solar system but cautious about a still theoretical result.

“It’s exciting and very compelling work,” says Meg Schwamb, a planetary scientist at Academia Sinica in Taipei, Taiwan. But only six bodies lead the way to the putative planet. “Whether that’s enough is still a question.”

Hints of a hidden planet go back to 2014. Twelve bodies in the Kuiper belt, the ring of frozen fossils where Pluto lives, cross the midplane of the solar system at roughly the same time as their closest approach to the sun (SN: 11/29/14, p. 18). Some external force — such as a large planet — appears to hold them in place, reported planetary scientists Chad Trujillo, of the Gemini Observatory in Hilo, Hawaii, and Scott Sheppard, of the Carnegie Institution for Science in Washington, D.C.

This new analysis “takes the next step in trying to find this giant planet,” Sheppard says. “It makes it a much more real possibility.”

In addition to what Sheppard and Trujillo found, the long axes of six of these orbits point in roughly the same direction, Batygin and Brown report. Those orbits also lie in nearly the same plane. The probability that these alignments are just a chance occurrence is 0.007 percent.

“Imagine having pencils scattered around a desktop,” says Renu Malhotra, a planetary scientist at the University of Arizona in Tucson. “If all are pointing in the same quarter of a circle, that’s somewhat unusual.”
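Malhotra’s pencil analogy is easy to quantify. The quick Monte Carlo sketch below estimates the chance that six randomly oriented directions all fall within some 90-degree arc; it illustrates the flavour of the statistic, not Batygin and Brown’s actual 0.007 percent figure, which also folds in the orbital planes:

```python
import random

def fits_in_quarter(angles_deg):
    # Six directions fit inside some 90-degree arc exactly when the
    # largest empty gap between neighbouring directions is >= 270 degrees.
    a = sorted(angles_deg)
    gaps = [(a[(i + 1) % len(a)] - a[i]) % 360 for i in range(len(a))]
    return max(gaps) >= 270

random.seed(1)
trials = 200_000
hits = sum(fits_in_quarter([random.uniform(0, 360) for _ in range(6)])
           for _ in range(trials))
print(f"about {100 * hits / trials:.2f}% of random scatterings align")
```

The analytic answer is 6 × (1/4)^5, roughly 0.6 percent, so even six pencils pointing into the same quarter of a circle is a noticeably unusual desktop.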

A hidden world might explain a couple of other oddities about the outer solar system. Dwarf planets Sedna and 2012 VP113, for example, are far removed from the known worlds (SN: 5/3/14, p. 16). Planet Nine could have put them there.

The planet would also stir up some of the denizens of the Kuiper belt into orbits that are roughly perpendicular to the rest of the solar system — a population of five known objects that Batygin was surprised to learn exists. When he and Brown compared their simulations of an agitated Kuiper belt to these bodies’ cockeyed trajectories, they found a match.

“If there was one dramatic moment in the past year and a half, this was it,” Batygin says. “We didn’t really believe our own story for the longest time. But here was the strongest line of evidence.”

Given what scientists know about how the solar system formed, the proposed planet is not native to its current environment. It probably originated closer to the sun and was kicked to the hinterlands after flirtations with the current roster of giant planets.

This wouldn’t be the first time scientists were led to a new world by the odd behavior of another. Astronomer Johann Galle found Neptune in 1846 after mathematicians Urbain Le Verrier and John Couch Adams calculated that an unknown planet could be causing Uranus to speed up and slow down along its orbit.

Uranus was a more clearly defined problem, says Scott Tremaine, an astrophysicist at the Institute for Advanced Study in Princeton, N.J. Le Verrier and Adams were trying to understand why Uranus appeared to defy the law of gravity, whereas Batygin and Brown are piecing together a story of how the solar system evolves.

“When we’re talking about history rather than laws, it’s always easier to go astray,” says Tremaine.

The orbital alignments are striking, he says, and Batygin and Brown have done sensible calculations. But he worries about hunting for statistical significance after noting a possible oddity. “That can be very misleading,” he says. “The numbers that won the Powerball lottery are an unusual combo, but that doesn’t mean anything.”

In the meantime, “the hunt for Planet Nine is on,” Batygin says. Data from NASA’s WISE satellite, which spent nearly nine months making an infrared map of the sky, rule out the existence of a planet as massive as Saturn out to 4.2 trillion kilometers from the sun, and a Jupiter-like world out to three times as far. If a smaller, cooler planet is out there, it’s probably in the outer third of its orbit, which puts it against a dense background of Milky Way stars, a planetary needle in a galactic haystack. “It’s not going to be impossible,” he says. “It just makes it harder.”

The Victor Blanco telescope in Chile and Subaru telescope in Hawaii are the best facilities for undertaking the search, Schwamb says. Both have cameras that can see large swaths of sky. If scientists don’t mind waiting, the Large Synoptic Survey Telescope will come online in 2023. Currently being built in Chile, LSST will image the entire sky once every three days.

“We would be able to detect Planet Nine even if it was moving slowly,” says Lynne Jones, an astronomer and LSST scientist at the University of Washington in Seattle. “We could look for motion from month to month or over the course of a year and quickly pick it out from the background stars.”

There’s also the possibility, though remote, that a serendipitous picture of the planet already exists. Uranus, Neptune and Pluto were all seen before anyone realized they were planets, dwarf or otherwise. Most observations don’t record things as faint as Planet Nine. “But there’s lots of archival data,” Sheppard says, accumulated in observatories as astronomers gather images of stars, nebulas and galaxies. “This could be sitting there somewhere.”

Fibre optics

The basic medium of fibre optics is a hair-thin fibre that is sometimes made of plastic but most often of glass. A typical glass optical fibre has a diameter of 125 micrometres (μm), or 0.125 mm (0.005 inch). This is actually the diameter of the cladding, or outer reflecting layer. The core, or inner transmitting cylinder, may have a diameter as small as 10 μm. Through a process known as total internal reflection, light rays beamed into the fibre can propagate within the core for great distances with remarkably little attenuation, or reduction in intensity. The degree of attenuation over distance varies according to the wavelength of the light and to the composition of the fibre.
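The geometry behind total internal reflection is simple enough to compute. In the sketch below, the refractive indices are typical textbook values for a doped-silica core and pure-silica cladding, chosen for illustration rather than taken from the text:

```python
import math

# Illustrative refractive indices (assumed, typical textbook values).
n_core, n_clad = 1.475, 1.460

# Snell's law: rays striking the core/cladding boundary at an angle
# (from the normal) beyond the critical angle are totally reflected,
# where sin(theta_c) = n_clad / n_core.
theta_c = math.degrees(math.asin(n_clad / n_core))
print(f"critical angle: {theta_c:.1f} degrees")

# Numerical aperture: how steeply light can enter the fibre end face
# and still be trapped inside the core.
na = math.sqrt(n_core**2 - n_clad**2)
print(f"numerical aperture: {na:.3f}")
```

With these values the critical angle is nearly 82 degrees from the normal, which is why only rays travelling almost parallel to the fibre axis are guided: such shallow bounces are what keep attenuation so low over long runs.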
When glass fibres of core/cladding design were introduced in the early 1950s, the presence of impurities restricted their employment to the short lengths sufficient for endoscopy. In 1966, electrical engineers Charles Kao and George Hockham, working in England, suggested using fibres for telecommunication, and within two decades silica glass fibres were being produced with sufficient purity that infrared light signals could travel through them for 100 km (60 miles) or more without having to be boosted by repeaters. In 2009 Kao was awarded the Nobel Prize in Physics for his work. Plastic fibres, usually made of polymethylmethacrylate, polystyrene, or polycarbonate, are cheaper to produce and more flexible than glass fibres, but their greater attenuation of light restricts their use to much shorter links within buildings or automobiles.
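The 100-km claim can be sanity-checked with the decibel arithmetic used for fibre loss. Assuming a loss of about 0.2 dB per kilometre, a typical figure for silica fibre near 1.55 μm (an assumption; the text gives no loss figure), the fraction of launched power that survives is:

```python
def power_fraction_remaining(loss_db_per_km: float, length_km: float) -> float:
    # Attenuation in decibels accumulates linearly with distance;
    # converting the total dB loss back to a linear power ratio:
    return 10 ** (-loss_db_per_km * length_km / 10)

# 0.2 dB/km over 100 km is a 20 dB total loss:
print(power_fraction_remaining(0.2, 100))  # 0.01, i.e. 1% of launched power
```

One percent of the launched power after 100 km is still comfortably within reach of sensitive photodetectors, which is why repeaters can be spaced so far apart.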
Optical telecommunication is usually conducted with infrared light in the wavelength ranges of 0.8–0.9 μm or 1.3–1.6 μm—wavelengths that are efficiently generated by light-emitting diodes or semiconductor lasers and that suffer least attenuation in glass fibres. Fibrescope inspection in endoscopy or industry is conducted in the visible wavelengths, one bundle of fibres being used to illuminate the examined area with light and another bundle serving as an elongated lens for transmitting the image to the human eye or a video camera.

Pidgin – Multiple Instant Messenger Service

Available as a free of charge download from www.pidgin.im, this small program supports 17 networks including favourites AOL, MSN and Yahoo as well as some lesser-known ones such as Jabber and Gadu-Gadu.  Additional chat clients such as Skype and the Facebook Chat tool can be added through the use of freely available third party plugins which are easily located on the Pidgin site.

Once installed, simply select the type of account you want to add (for example a Google Talk account) along with your user name and password.  Any of your contacts from that account that are currently online will automatically appear in the buddies list and you can begin chatting straight away.  Not only does this offer the distinct advantage that you don’t need to switch between several applications if you have contact with friends on multiple networks but it also cuts down on the resource requirements of having multiple chat services running on one machine.

Along with being cross compatible with different networks, the Pidgin application is also available for many different Operating Systems; as well as the obvious Windows version, the developers have provided support for Solaris, SkyOS, Qtopia, UNIX, Linux and even the AmigaOS.

All the standard features you would come to expect such as contact organiser, custom smileys, file transfers and group chats are present.  The only slight criticism that I would have is that it doesn’t support video and voice chat; however, my assumption is that these protocols are difficult to integrate into an application that has been designed to be compatible with dozens of networks and half a dozen different Operating Systems.  Hopefully this lack of functionality will be addressed in future releases.

Pidgin is completely customisable; the preferences dialog box provides an area where you can define every conceivable option including the interface, sounds, network connection, chat logging and your default availability status.  In terms of appearance you can also change the font type, size, colour and formatting, along with installing new themes which change the appearance of smileys and status icons.  An additional option to install themes in order to change the actual user interface would be welcome as the default interface may be a little dull and unintuitive for some users.

Apple Mac OS X ‘Snow Leopard’

Windows 7 is due for release on the 22nd October; however, Apple announced on Monday that the release date of OS X ‘Snow Leopard’ had been brought forward from early September to Friday 28th August – the publication date of this column.

After covering the release candidate of Windows 7 some weeks back I will this week attempt to cover the Apple offering.  I don’t own a Mac or the developer version of Snow Leopard and, as such, the overview will be a distillation of what is known by the community so far which, owing to Apple’s usual intense secrecy, is relatively little.

It would appear that Apple hasn’t concentrated so much on adding new features to the Operating System as on refining the underlying code.  Installing ‘Snow Leopard’ on to a system currently running ‘Leopard’, for example, will immediately free up 6GB of disc space.

A great deal of this space has been saved by the OS no longer offering support for the older PowerPC processors and, as such, you will need a more modern Mac utilising an Intel processor to make the switch.  How much of the space saving comes from Apple’s claim that it has refined 90% of the Leopard code and how much is down to the removal of the PowerPC code we don’t know.

The refinements that Apple have made have also resulted in users of the developer preview edition already in circulation reporting a faster installation along with a speedier start up and shut down sequence.  Apparently the OS also feels more punchy and responsive than previously.

With Snow Leopard, Apple has successfully made the jump to 64-bit computing; a technology that is rapidly becoming standard in both the PC and Mac world.  Practically all the bundled applications have been rewritten to take advantage of a 64-bit processor if available and this can potentially result in some pretty impressive speed increases.  As an example, the 64-bit version of the Safari web browser is claimed to be up to 50% faster than the 32-bit version.

There are a number of minor improvements which are worthy of mention:

Power – Waking up from sleep mode is twice as fast with an additional speed increase of up to 50% when subsequently searching for a wireless network.  Improved power management also means that if you are sharing files across a network then your computer won’t disconnect all users from your drive when it enters sleep mode.

Quicktime – The bundled media player has been updated with a cleaner interface, easier uploads to YouTube and additional features previously only supplied in the paid-for professional version.

Finder – This file browsing facility is now more responsive and includes an enhanced icon view along with more customisable search options.

Services – The services menu which allows you to make use of a specific service provided by another application installed on your hard disk will now only show you services relevant to the application you are currently using.

Stacks – Dock items that give you fast access to a folder or files are now scrollable so you can easily view all items.  Exposé is also refined so you can just click and hold an application icon in the dock in order to unshuffle all the windows for that application so you can quickly change to another one.

As a final point, it should be noted that the upgrade version of Snow Leopard is priced at just $29.99 (around £20) making it a worthwhile upgrade for all Mac users.  Although Macs are undeniably more expensive to buy and purchase accessories for, this price looks incredibly appealing compared with the anticipated £70 price of Windows 7 Home Premium.

Connecting your Computer to a TV

Connecting a computer to your TV is an extremely simple process, so it is surprising that so few people have taken the plunge. The most obvious use in my mind would be playing a film downloaded from the Internet directly from the computer, rather than having to burn it to disc to play in your DVD player. There is also the added advantage that, now most TVs support high resolutions (a measure of how many pixels the screen can display), you could use one in place of a conventional monitor; your favourite game and even the Internet would look much better on a 42” widescreen!

There are a number of ways to connect a TV and computer and below are the three most popular:

S-Video (Separate Video)
This standard is supported primarily by older, non-HD TVs which don’t display the kind of resolutions required of a conventional computer screen; whilst a low resolution screen is fine for TV and film pictures, it is unable to provide the kind of clarity needed for operating a computer. As such, this option should only be used if you have an older TV and certainly only for watching movies.

You need to look out for a small, round, yellow socket on the back of your TV and computer. Providing you have both, it’s simply a case of purchasing a standard S-Video cable (which we sell at Refresh Cartridges for £2.99) although if the socket is absent from your machine you will require an expensive signal converter box.

SVGA (Super Video Graphics Array)

Practically every computer has a standard monitor connection, as does practically every LCD or Plasma screen, so this method is certainly the most popular. An SVGA socket is 15-pin, ‘D’ shaped and blue in colour and once you have confirmed they are present on both your PC and TV it is simply a case of buying a standard monitor cable which, again, is available through us for £2.99.

The ease of use of this connection type varies depending on your computer; if you have a laptop then you will be able to display an image on your TV at the same time as using the inbuilt screen but unless your conventional PC includes two SVGA connectors then for the period it is connected you will be using the TV as your main monitor.

DVI (Digital Visual Interface)

This would be your ideal method of connecting your PC to the TV, as the DVI standard relies on a digital signal rather than the older SVGA, which is an analogue system. This should result in an increase in quality as you remove the need for the PC to convert its digital signal to analogue before transmitting down the cable, just to have the TV switch it back to digital again on receipt.

As this is a fairly new standard, whether your PC will be equipped or not is uncertain; you are looking for a long, white connection with 24 pins (three rows of eight). Provided this is present, your choice of cable will be influenced by your TV set; you will need either a DVI to DVI cable or a DVI to HDMI (a small ‘D’ shaped, colourless socket approximately 14mm x 4.5mm) cable. If the Herald Express will allow me another shameless plug then I will mention that we sell either for £6.99.

Rise of the Alternative Network Provider

Incumbent telecom operators in the U.S. face a new category of competitors that play by a different set of rules. These alternative network providers aim to disrupt the traditional telecom business model by lowering access costs and improving the user experience. Their motivations differ from incumbent telcos, which focus on monetizing their connectivity solutions. Rather, these alternative network providers view access as a sunk cost necessary to drive their other initiatives, such as digital advertising and e-commerce. The stakes are high because these market dynamics will shift the balance of power, money, and landscape makeup in coming years. Only the strongest and most nimble incumbent operators will survive the coming shakeout.
There are three emerging segments of the alternative network provider space: Wi-Fi, cloud, and advertising. Each of these areas is driving increased interest by nontelecom companies in pursuing telecom endeavors.
Wi-Fi is a viable alternative to cellular
Wi-Fi is becoming a viable alternative to traditional cellular service, not only offering data, but also voice and text services. The prevalence of hotspots, in residential and commercial buildings as well as in public venues, is making Wi-Fi coverage nearly ubiquitous across large swaths of urban and suburban areas. A new breed of operator is emerging to capitalize on Wi-Fi, including cable operators, startups such as Republic Wireless and Internet companies such as Google. 
Wi-Fi operators pose a significant challenge to incumbent telecom operators because Wi-Fi is relatively low cost to use and the quality of service has been greatly enhanced due to innovations in handover technology and seamless authentication. In many cases, Wi-Fi is being offered for free, with the cost being subsidized by new business models, such as analytics and advertising, which is why this is so disruptive to telcos.
Wi-Fi offers the lowest-cost, highest-impact way to deliver connectivity. The ability to leverage unlicensed (free) spectrum, minimal backhaul requirements and the endemic footprint of hotspots across the U.S. makes Wi-Fi a considerable threat to cellular.
Advertising disrupts access model 
Facebook (via its Internet.org initiative) and Google have taken on the seemingly insurmountable challenge of bringing low-cost Internet access to the world population. This is no small feat, as nearly two-thirds of the world’s population lacks Internet access, particularly in emerging markets. 
Both companies are investing in and tinkering with new technologies aimed at trying to solve this problem, including using fiber, Wi-Fi and “space furniture,” such as satellites, balloons, drones and blimps, to blanket the planet with wireless coverage. Google is also pursuing becoming a mobile virtual network operator to offer its branded wireless service, piggybacking on the networks of Sprint and T-Mobile to offer service in the U.S. market. 
Facebook and Google are able to justify these endeavors to their stakeholders because they are indirectly driving growth in their core business, which is to sell digital advertising, by offering free or nearly free access. The more people using the Internet, the more opportunities there are for these companies to sell their ads. This model is highly disruptive to incumbent telcos because they are in the business of selling access. TBR believes that if companies like Facebook and Google are able to drastically reduce the cost of Internet access while still providing a “good enough” quality of service, it will render the traditional telecom business model obsolete. 
Facebook and Google are going one step further than just access, however. They are also engaged in lowering device costs (i.e., not just smartphones, but also other connectable devices such as meters, wearables and the like) and making apps more data efficient. Tackling each of these areas in unison will help make devices and connectivity affordable for the mainstream world population.
Cloud builds out network backbone
Companies in the cloud business, or that rely on cloud internally, are proactively ensuring they can support their business scale and provide optimal quality of service to their customers. Amazon’s key focus is to ensure it can support the exponential growth in its Amazon Web Services (AWS) business. Relying on incumbent telcos for bandwidth, low latency and reliable connectivity is not only expensive but also a business risk. 
Therefore, Amazon is investing in its optical infrastructure to connect its data centers to better control its business, and Microsoft, Google, Facebook, IBM and Salesforce are doing the same (i.e., owning and controlling fiber links to ensure they optimize their cloud businesses). These companies are taking part in terrestrial and submarine optical projects and are involved in building out their infrastructure or leasing large portions of infrastructure from third parties to secure bandwidth. Buying dark fiber is another area of great interest to this segment of companies, as this infrastructure is built out and can be purchased inexpensively compared to the cost of deploying net-new fiber lines. 
This movement by cloud providers is pushing down traffic carriage costs for traditional operators, making it harder for them to monetize their networks. The more nontelecom companies start owning and controlling their fiber backbones, the greater the disruption to traditional telecom operators.
Competition is good for network vendors
Network vendors are benefitting from the disruption occurring in the telecom market. Not only do they have new customers to which to sell infrastructure, they also are selling more to their traditional customers as they fight to protect their core business and stay relevant. 
Webscale 2.0 companies, including Google, Facebook, Amazon and Microsoft, comprise a significant portion of key network vendor revenues. In 2014 Webscale 2.0 companies represented around 20 percent of total revenue for some key vendors, including router supplier Juniper and optical transport suppliers Ciena and Infinera, and growth is accelerating. Cisco, Alcatel-Lucent, and other network suppliers are also citing increased activity from nontelecom customers, and Wi-Fi operators are becoming key customers for a range of network suppliers, including Ericsson, Aruba, Ruckus Wireless, and Cisco.
Some customers are buying off-the-shelf products, while others are having custom-made products manufactured by a series of OEMs. Still, spend is flowing into this sector as this new segment of customers ramps up internal initiatives, resulting in opportunities to sell hardware as well as software and services. 
This fact is underscored by IT services companies jumping into the fray and supporting these customers with a range of solutions, spanning from consulting and systems integration services to network design and planning services to back-office software support systems and platforms. 
The balance of power in the telecom industry is shifting rapidly to content and Internet companies. Alternative network providers realize they need to be proactive to protect their market positions and blaze their own paths to growth. Relying on incumbent telecom operators for business-essential functions, such as providing ubiquitous Internet connectivity and 99.999 percent reliability, is a risky and costly proposition, and these companies are taking more control over the value chain to secure their destinies and ensure they can provide optimal service to their end customers.
Incumbent operators are in a precarious situation because the prevalence of alternative network providers is increasing downward pressure on access prices and will continue to shift value-added services to over-the-top players. Incumbent operators will need to accelerate their business transformation to regain their nimbleness and be able to operate profitably at lower access prices. This will require a focus on software-mediated technologies such as NFV and SDN as well as leveraging cloud and analytics to streamline their networks and make them more flexible.
Suppliers are in an enviable position because their addressable market is growing as more companies enter the telecom space. Selling network infrastructure to content and Internet companies has become a significant contributor to vendor revenues while traditional customers increase investment to remain competitive. TBR believes incumbent telecom operators will accelerate their shift to software-mediated technologies to stay competitive. This will drive a windfall for network vendors because transformation projects tend to be large in scale and take multiple years to implement.

5 Cool Mouse Operations You Can Use In Windows

Here are five mouse operations that you can use in Windows or with associated software.

1 – Open new links in brand new tabs on Windows Internet Explorer

If your mouse has three buttons, you can use the middle one (the mouse wheel) to open links in new tabs: simply place the mouse pointer over a link and press down on the mouse wheel.

The mouse wheel rolls forward and back, but it can also be pressed down and clicked just like a button.  Clicking it on a link opens that link in a new tab, which is a lot quicker than right-clicking and choosing “open in a new tab.”  It is an easy way to research several items at once, opening each in its own tab with a single click.

If you are feeling super lazy you can hold CTRL and press Tab to cycle through your tabbed windows, or you can hold Alt and press Tab to see a screen of windows showing all of the items you have active at the moment, including your tabbed windows.

2 – You may find hidden menus within context menus on Windows

Some files and icons on Microsoft Windows may be right-clicked to reveal a context menu, and on some icons you can hold Shift whilst right-clicking to reveal an even bigger menu.  You should try it on your hard drive icon.  This is a very nice little trick to use if you are a hardcore Windows user.

3 – You are able to select columns of text with some Windows applications

On some applications you are able to select text vertically, as opposed to horizontally.  You do this by holding the ALT key and then dragging the mouse across the text you would like to highlight.  This can be done in some versions of Microsoft Word and in many advanced text editors.  You will even find that you can use this technique in the fantastic code-writing software known as Notepad++.

4 – You are able to drag and drop items into some menus

When you right-click the bottom taskbar/icon bar, a contextual menu pops up. In many cases, whilst this menu is open, you are able to grab certain icons and add them in there. For example, if you right-click the folder icon in the bottom left of the taskbar (next to the start menu), you will see a list of your most recently accessed files.

Click and hold an icon on your desktop, then drag it into the open menu to pin it there. From then on, every time you right-click the icon you will see two lists: one with your recent items and one with the items you pinned. You can use this instead of having to search through the directories on your computer to find files.

5 – Sometimes you are able to select separate chunks of text

If you would like to select several separate chunks of text, hold the Ctrl key while highlighting each section; the text in between stays unselected. This is a nice alternative to a single continuous selection, which is all-inclusive and does not allow you to omit certain parts of the text.

Why Network Synchronization Matters

Ask anyone, and you’ll hear that most people dislike it when their calls drop, or when the dreaded ‘loading’ icon disrupts their live stream of the playoff game. Carriers realize this, and know that network speed and reliability are driving forces behind consumer satisfaction in today’s connected world. With the exabytes of data sent across today’s networks, however, speed and reliability can be difficult to maintain consistently.

What’s less commonly known is the role that network timing and synchronization play in this whole equation. 4G LTE networks, for example, rely on highly accurate timing and synchronization for smooth cell-to-cell transfers of the mass of voice, video and mobile data.

4G LTE, LTE-A – What’s The Difference?

But first, some background about what 4G LTE really means. LTE is a broad umbrella encompassing three different network types:

–        Frequency Division Duplexed LTE, or FDD LTE, uses paired spectrum – one for upstream traffic and the other for downstream. FDD LTE was used in some of the early LTE deployments and is still deployed today;

–        Time-Division Duplexed LTE, or TDD LTE (also sometimes called TD-LTE), is more spectrally efficient. Unlike FDD LTE, TDD LTE requires only a single spectrum band for both upstream and downstream traffic, flexibly allocating bandwidth by timeslot, and generating significant cost savings for carriers in spectrum licensing fees; and

–        LTE-Advanced, or LTE-A, is an upgrade to either of the two types outlined above, delivering greater bandwidth by pooling multiple frequency bands and allowing simultaneous data transmission from multiple base stations to a single handset.

These different ‘flavors’ of LTE need different types of synchronization, and wireless networks use two kinds: frequency synchronization and time-of-day synchronization. FDD LTE needs only frequency synchronization. TDD LTE and LTE-A, on the other hand, require both. And therein lies the challenge.
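The requirements above can be summarized in a tiny lookup table. The Python sketch below is purely illustrative (the names are my own, not from any standard), but it captures which synchronization types each LTE flavor needs:

```python
# Which synchronization each LTE flavor requires, per the text above.
SYNC_REQUIREMENTS = {
    "FDD-LTE": {"frequency"},                 # frequency sync only
    "TDD-LTE": {"frequency", "time-of-day"},  # both required
    "LTE-A":   {"frequency", "time-of-day"},  # both required
}

def needs_time_of_day(network_type):
    """Return True if this LTE flavor needs time-of-day sync (not just frequency)."""
    return "time-of-day" in SYNC_REQUIREMENTS[network_type]

print(needs_time_of_day("FDD-LTE"))  # → False
print(needs_time_of_day("TDD-LTE"))  # → True
```

This is why GPS (which provides both kinds) sufficed historically, and why the move to TDD LTE and LTE-A forces the question of alternatives.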

Historically, wireless networks have used the Global Positioning System (GPS) as the main timing source, since it can provide both frequency and time-of-day synchronization. But carriers now recognize its drawbacks, especially as networks rely more on small cells (femtocells and picocells) for increased coverage and capacity. Often, small cells installed at street level or indoors lack a direct line of sight to GPS satellites. Even if that weren’t the case, adding GPS technology to these units would make them too expensive to deploy on a mass scale. Add to that the growing concerns about GPS spoofing and jamming, plus the unwillingness of countries outside the U.S. to depend exclusively on the U.S. government-run GPS satellite system for their wireless networks, and clearly carriers need alternatives.

Fortunately, there is an alternative: IEEE 1588 Precision Time Protocol (1588 or PTP). Not only can it deliver the frequency and time-of-day synchronization needed in TDD LTE and LTE-A networks, but it’s more cost-effective than GPS as well. Especially as carriers rely more on heterogeneous networks, or HetNets, using 1588 as a GPS alternative for network timing becomes more critical. By definition, HetNets comprise both fiber and microwave equipment, including the more widespread small cells mentioned above. Compounding this is the fact that most carriers use network equipment from several vendors, which may or may not offer 1588 support.
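To illustrate the core idea behind 1588/PTP, here is a minimal Python sketch of its two-way timestamp exchange. The function name and example timestamps are hypothetical; real PTP implementations use hardware timestamping and nanosecond-resolution message fields, but the offset/delay arithmetic is the same:

```python
# Sketch of the IEEE 1588 (PTP) offset calculation.
# Four timestamps are exchanged: the master sends Sync at t1, the slave
# receives it at t2; the slave sends Delay_Req at t3, the master
# receives it at t4.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (clock_offset, one_way_delay) in the same units as the inputs.

    Assumes a symmetric network path, which is the core assumption
    PTP makes when estimating the slave's clock offset.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 5 ms ahead; true one-way delay is 2 ms.
t1 = 100.0  # master sends Sync (master clock)
t2 = 107.0  # slave receives it (slave clock: 100 + 2 delay + 5 offset)
t3 = 110.0  # slave sends Delay_Req (slave clock)
t4 = 107.0  # master receives it (master clock: 110 - 5 offset + 2 delay)

offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # → 5.0 2.0
```

Once a small cell knows its offset, it can discipline its local oscillator toward the grandmaster clock, delivering both frequency and time-of-day sync over ordinary packet networks.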

True, most wireless customers ultimately won’t care how they get their services, just that the services work when and where they’re needed. But from a network infrastructure perspective, IEEE 1588 is here to stay. Carriers need to look for it and plan accordingly as they continue their network rollouts to support next-gen advanced wireless services.

Ways to Outsource Software Development and Actually Get Things Done

Developing digital assets is rapidly becoming one of the most expensive undertakings for businesses in 2016. Skilled developers command high salaries, which can add up to millions of dollars over the life of your startup. After all, it often takes a team of project managers and developers to create and maintain unique, high quality software.

This leads many businesses down the road of outsourcing development work to remote developers, often in countries like India or Ukraine. It allows businesses to keep costs low, and can even reduce product development time by allowing a larger number of developers to be hired.

Is there a problem with this? Not really, just differences, which you can manage if you know what to expect. You can’t directly manage these remote workers in-person, so getting the work done remotely will be a different experience than with your in-house employees. This doesn’t have to be a difficult experience though! Here are three ways to have your cake and eat it too with outsourcing your software development.

Hire an Experienced Project Manager

Having an in-house project manager is a great way to maintain control over your development without bringing on expensive in-house workers. It’s up to the project manager to oversee the remote workers and keep you up to date on the progress being made. This may even be a part-time position, because remote workers rarely need full-time management unless large numbers of people are working abroad.

This means you and your team can concentrate on business, while still being completely informed of the ups and downs related to the project. Your manager can brief you daily or weekly, translating the technical jargon into plain English, as well as providing demonstrations as projects are fleshed out.

For those who thrive on control, this may be a great solution to reduce costs of in-house developers and reduce the risk that comes from outsourcing.

Bring on a Firm to Handle it All

If your company doesn’t want the extra overhead of a project manager, you could bring on a firm like DevTeam.Space or Toptal.com to handle the entire process.

These types of companies have their own developers, or source from a pool of hundreds of manually vetted development teams in the case of DevTeam.Space, so that you don’t have to worry about recruiting or managing developers. This solution also allows your business to scale up projects if necessary, as these firms can more easily bring in additional developers than you could in-house.

The goal of these businesses is to be a safe solution for outsourcing software, as there’s more organization and peace of mind than working with individual remote developers.

“The software outsourcing market will continue its rapid growth, serving more and more companies every year. To actively participate in this growth, every company should focus on providing a higher level of quality and communication,” says Alexey Semeney, CEO of DevTeam.Space, a firm that helps companies build software using elite remote development teams. “We provide every client with two project managers, vetted and trained senior-level developers with at least 5 years of dev experience, and a reporting dashboard with daily written updates and roadblock tracking. This not only allows us to build precise project estimates and deliver products faster, but makes our clients feel safe and in control of the situation.”

Find a Rockstar as Your Lead Developer

If you already have someone to oversee the project, a rockstar in-house developer can be an extremely valuable asset to maintain quality standards and handle the most complex parts of the development process.

Having access to someone in-house can also make getting through roadblocks easier, as well as keep you even more informed of the ups and downs associated with developing digital products, especially enterprise-level software.

Yes, you’ll end up paying this person more than even an average in-house developer; however, it’s more than worth it if this rockstar provides the value of two or three mediocre developers. Like hiring an in-house project manager, this option is for companies looking to save money through outsourcing without sacrificing control, and potentially quality, in the process.

To find the right fit for your company, consider using a service like CyberCoders to get the most exposure for this type of position. You’ll want to make sure you get the right person in for the job, as they could make or break a project.

Windows Admins Get New Tools Against Pass-The-Hash Attacks

With the Windows Server 2012 R2 and Windows 8.1 releases, Microsoft shipped a slew of new features specifically created to stop or minimize PtH attacks, which version 2 of the PtH whitepaper covers in good detail. Here’s a recap of the new Windows PtH mitigations:

  • Strengthened LSASS to prevent hash dumps
  • Many processes that once stored credentials in memory no longer do so
  • Better methods to restrict local accounts from going over the network
  • Programs are prevented from leaving credentials in memory after a user logs out
  • Allows Remote Desktop Protocol (RDP) connections without putting the user’s credentials on the remotely controlled computer
  • A new Protected Users group, whose members’ credentials can’t be used in remote PtH attacks
  • Several other OS changes that make PtH attacks far more difficult to achieve

Most of these protections are now available in all of Microsoft’s supported operating systems. If your company is worried about PtH attacks, you should implement these mitigations. Yes, hackers and malware writers are already working overtime to defeat these defenses, but enabling them can only help you and reduce risk.

But you can take many other measures involving traditional policies, procedures, and controls that don’t require the new features. If done correctly, they can provide even better protection than the mitigations listed above:

  • Prevent the bad guy from gaining local admin/domain access in the first place. This involves perfect patching, educating end-users against social engineering, and making sure end-users aren’t logged in with elevated accounts.
  • Reduce the number of privileged domain accounts to zero or as near to zero as you can get. We call this the “zero admin” model. Don’t allow permanent members of any elevated group. Instead use delegation, credential vaulting, or a PIM product.
  • Audit, alert, and respond anytime membership in an elevated group changes unexpectedly.
  • Require all elevated accounts to use two-factor authentication.
  • Require all admins to use highly secure (no Internet access, strict whitelisting control, and great patching), dedicated (per user) “jump boxes” to administrate other servers.
  • Don’t allow elevated domain accounts to log on to end-user workstations. Instead, they should be logged on using delegation or as a local Administrator.
  • Deny the ability for elevated local accounts to log on to other computers over the network (use group policy or local policy).
  • Make sure all elevated local accounts, like Administrator, use unique passwords, so a hash stolen from one machine can’t as readily be used to attack other nearby computers.
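As a small illustration of the last point, the sketch below assigns each machine’s local administrator account a unique random password, so a hash captured on one box is useless against its neighbors. The helper and machine names are hypothetical; in production, a tool like Microsoft LAPS manages this for you:

```python
# Sketch: unique, random local admin passwords per machine.
# The function and machine names are illustrative, not a real tool.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def unique_admin_passwords(machines, length=24):
    """Return a dict mapping each machine name to a fresh random password."""
    return {m: "".join(secrets.choice(ALPHABET) for _ in range(length))
            for m in machines}

vault = unique_admin_passwords(["WKS-01", "WKS-02", "WKS-03"])
# Every password is distinct, so one compromised hash can't be replayed
# against the other machines.
assert len(set(vault.values())) == len(vault)
```

The generated passwords would of course need to be stored in a secured vault and rotated regularly; the point is simply that identical local admin passwords are what make lateral PtH movement cheap.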

Lastly, use remote management tools and methods to log on to remote computers without placing the credentials in the remote computer’s memory. Built-in and common Windows methods include: