Cannot move file to trash, do you want to delete immediately?

When nautilus trashes something, it avoids moving the file across partitions. Moving between partitions takes much longer, and if that partition is later removed, the trash has no place to restore the file to.

This isn’t a problem on systems without a separate home partition, because then putting files in ~/.local/share/Trash doesn’t send them to a different partition.

Anything on the same partition as your home directory is sent to ~/.local/share/Trash; on setups with only one partition this covers the entire root partition.

On any other partition nautilus will make a .Trash-1000 folder on the root of the partition, then send all trashed files into that. This works rather well on external drives that you have full read/write access to, though it won’t work if you don’t have write permission to the root of the drive.

Because your / partition isn’t the same as your /home partition, and no writable .Trash-1000 folder exists at the root of your system, nautilus will fail to trash files. Thus the delete key won’t work and a trash action won’t be available in the menus.

You could try running nautilus as root and deleting one file so that the /.Trash-1000 folder is created correctly, then running sudo chmod -R 777 /.Trash-1000 to give yourself permission to use a trash on the / filesystem. I can’t confirm this from experience, but it should work fine, so give it a try.

Ad-tracking has turned into people-tracking .. How advertisers became the NSA’s best friend !!

This week, new documents from NSA leaker Edward Snowden arrived with some troubling revelations: the NSA has been piggybacking on Google’s network, using the company’s “preferences” cookie to follow users from site to site, proving their identity before targeting them with malware. It means the agency has tapped into one of the most popular features on the web and the core of Google’s multibillion-dollar ad-targeting empire. Instead of just targeting ads and saving preferences, the infrastructure is being used to find people the NSA is interested in and silently infect their devices with malware.

What’s still unclear is whether the NSA is directly hacking Google or using some other way to track these cookies. But while the company is officially keeping quiet, the simple math of cookie tracking makes it likely that the NSA didn’t need any help from Google. Tracking cookies offer the NSA the perfect system for following suspects across the web: they’re pervasive, persistent, and for the most part still unencrypted. “It solves a bunch of tricky problems for bulk web surveillance that would otherwise be quite difficult,” says Jonathan Mayer, a fellow at Stanford’s Center for Internet and Society who worked with the Washington Post on the report. The right cookie will follow you as your phone moves from 3G to a coffee shop’s Wi-Fi network, and in many cases it’ll broadcast your unique ID in plain text.

For the NSA, it’s practically made to order. If the agency can suss out a particular person’s unique cookie ID, they can watch for the ID at the cookie-delivery spot (in this case, Google) and get a full record of the person’s movements on the web. The Washington Post doesn’t describe how the agency uses those cookies to deliver malware, but many researchers have already guessed at a likely mechanism. With control of the network, the agency could inject packets in place of a standard cookie, seeding your device with whatever program they want. The result would look like a cookie from Google, but actually be a malware packet disguised as a cookie, tailored to whichever site the agency knows you’re visiting. It’s still just speculation, but it gives a sense of just how powerful the cookie system is for a network-level attacker like the NSA. Once the agency controls cookies, it can use them as a free pass into almost any machine on the web.

It’s hard to guard against these attacks because encryption schemes are uniquely tricky to implement for cookies. As cryptographer Ed Felten points out, regular encryption doesn’t work in the case of unique cookie IDs. (The encoded version of a unique ID is a unique ID itself — all you’ve done is change the number.) The more permanent solution is HTTPS-based encryption, but the more complex handshake slows down load times, which scares away many trackers. The result is a lot of identifying information being sent over public tubes with little to no protection.
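
To make Felten’s point concrete, here is a toy C sketch (only an illustration, with a made-up cookie value and a simple djb2-style hash standing in for whatever deterministic encoding a tracker might apply): because the encoding always maps the same input to the same output, the encoded value is itself a stable per-user tag that a network observer can follow without ever reversing it.

#include <stdio.h>
#include <stdint.h>

/* Toy stand-in for any deterministic encoding of a cookie ID.
 * The specific function doesn't matter: the same input always
 * maps to the same output. */
static uint64_t encode(const char *id) {
    uint64_t h = 5381;
    for (; *id; id++)
        h = h * 33 + (unsigned char)*id;
    return h;
}

int main(void) {
    /* Hypothetical PREF-style cookie ID seen in two separate requests */
    const char *request1_cookie = "ID=6f2a9c41d3e8b7f0";
    const char *request2_cookie = "ID=6f2a9c41d3e8b7f0"; /* same user, later */

    /* An eavesdropper never needs to decode anything: the encoded value
     * is just as unique and just as persistent as the raw ID. */
    printf("observed tag #1: %016llx\n", (unsigned long long)encode(request1_cookie));
    printf("observed tag #2: %016llx\n", (unsigned long long)encode(request2_cookie));
    return 0;
}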

The catch is that Google is one of the few companies that enables HTTPS on principle, even if that makes the +1 buttons load a little slower. HTTPS is enabled for both Google’s DoubleClick ad cookies and its service-based preference cookies — including the PREF cookie that’s mentioned in the new Snowden documents. To follow that cookie, unlike most of the cookies floating around the web, the agency would need to get past at least a little bit of HTTPS. It’s certainly plausible that they found a way around it. We know from earlier leaks that the NSA has ways of getting around SSL, and it may have followed the Google cookies using similar tricks — but it seems more likely that the agency would have moved on to easier pickings, given the prevalence of unencrypted tracking-cookie networks. Ironically, Google’s good security practices are slightly incriminating here: the more secure its network, the more likely it is that the attackers were working from the inside, whether through legal compulsion or tapping private networks.

The most likely explanation, favored by UC Berkeley cryptographer Nicholas Weaver, is a little less exciting. “I suspect it’s an old slide, written from back when Google’s cooperation wasn’t needed,” Weaver says. “But I’m not certain about it.” The Washington Post dates the slide to a presentation given in April 2013, three months after Snowden first made contact with Greenwald and well after Google implemented HTTPS for its PREF cookies. It could have been an outdated slide, or Snowden could simply have gotten the date of the presentation wrong, although the Post has emphasized that the date was thoroughly vetted before publishing. Either way, many are skeptical of Google involvement, including Mayer. “It doesn’t appear the NSA had any particular access to Google infrastructure,” Mayer says. “This was based on watching tracking cookies flow across the open web.”

The larger problem is figuring out where we go from here. Google’s PREF cookie is a powerful tool, reaching every page with a Google Search bar, Google Map, or +1 button — but it’s hardly the only cookie that could be used this way. Tools like Ghostery will show dozens of cookies following you from site to site, whether it’s for ads, analytics, or universal log-ins like Facebook. Any one of those cookies could be used the same way: to find a single person and drop malware silently into their device. As long as one of them is unencrypted, the NSA will have an unimpeded path through, and while the companies are competing on load times rather than security, they have little incentive to switch.

Seen from that vantage point, the problem isn’t Google: it’s everyone. “The quid pro quo of the behavioral advertising ecosystem stinks,” says ACLU technologist Chris Soghoian. “Our web browsers and mobile operating systems have been designed with defaults that facilitate tracking of our activities. It’s only natural that the NSA would try to harness it.” The web runs on tracking. It powers our analytics, our ads, and personalized services from Facebook to Netflix. It’s not clear what unwinding that system would even mean. Universal HTTPS would be a start (some have already proposed it), but the deeper problem is a web that’s built for speed rather than security. Most ad networks have never even considered how to guard against a network-level attacker like the NSA. Hardening those networks would be a massive undertaking, requiring new security at every level and no small amount of performance tradeoffs. Even now, after the Snowden leaks have proved how real the threat is, it may not be a leap they’re willing to take.

Conclusion: turn off cookies to turn down the NSA’s tracking of your machine .. DAMN?!!

Credit to http://www.theverge.com

Why 1000+ core CPUs do matter these days ?!

We have come to the end of the road for clock speed improvements. As a corollary to Moore’s Law, CPUs will double in the number of cores every 18 months. Based on this trend, and starting from dual-core chips around 2005, ten doublings in 15 years gives 2 × 2^10 = 2048 cores, so 1000-core CPUs would become commonplace before 2020. Recently, several research projects have started prototyping or implementing 1000-core chips, including Intel’s 80-core Teraflops Research Chip (also called Polaris) and 48-core Single-Chip Cloud Computer (SCC), CAS’s 64-core Godson-T, Tilera’s 100-core Tile-Gx, MIT’s 1000-core ATAC chip, and the recent Xilinx FPGA-based 1000-core prototype from the University of Glasgow. With rising core counts, chip multiprocessors (CMPs) following the traditional bus-based cache-coherent architecture will fail to sustain scalability in power and memory latency. Targeting a kilo-core scale, a paradigm shift in on-chip computing towards tile-based (tiled) architectures has taken place. A tile is a block comprising compute core(s), a router, and optionally some programmable on-chip memory for ultra-low-latency inter-core communication. Instead of buses or crossbars, a network-on-chip (NoC), commonly a 2D mesh, is used to interconnect the tiles. To avoid the “coherency wall”, which emerges as an eventual scalability barrier, some projects, notably Intel’s SCC and Polaris, do away with coherent caches and promote software-managed coherence via on-chip inter-core message passing instead. Eliminating hardware coherence may also lead to more energy-efficient and flexible computing.

Previous work has shown that only about 10% of application memory references actually require cache coherence tracking. Applications tend to have most of their data read-only shared and only a little read-write shared; hardware coherence can thus be overkill and waste energy (possibly up to 40% of total cache power).

Scaling up to 1,000 cores runs into another barrier: the “memory wall”. Over the past 40 years, memory density has doubled nearly every two years, but memory performance has improved only slowly (a DRAM access still costs hundreds of CPU clock cycles). Besides the growing speed disparity between the CPU and off-chip memory, the current memory architecture scales poorly even to 100 cores, since CMPs are critically constrained by off-chip memory bandwidth. Only a limited number of DRAM controllers (e.g. four in the SCC) are connected to the edges of the 2D mesh, and this will not scale with increasing core density because of hard limits on package pin density and pin bandwidth to memory devices. The reality of as many as 1,000 cores sharing a few memory controllers raises the question of how to spread processor memory traffic uniformly across all the available memory ports. To mitigate the external DRAM bandwidth bottleneck, one solution is to increase the amount of on-chip cache per core so as to reduce the bandwidth demand imposed on off-chip DRAM; however, this also reduces the number of cores that can be integrated on a die of fixed area.

Recent 3D stacked memory techniques can be employed (as in Polaris) to alleviate these planar layout issues by attaching a memory controller to each router in the NoC. 3D stacking, however, makes it difficult to cool systems effectively through conventional heat sinks and fans.

OS support for many-core chips also needs a radical rethink. Today’s OSes with symmetric multiprocessor (SMP) support have been adapted to work on CMPs but cannot scale to high core counts (e.g. the Linux 2.6 kernel’s physical page allocator does not scale beyond 8 cores under heavy load). Such SMP-based OSes also rely heavily on hardware cache coherence for efficient access to kernel-space data structures and locks, which will not be available on future non-cache-coherent kilo-core CMPs. In light of these problems, the design of next-generation OSes for CMPs is moving toward the multikernel paradigm (an evolution of the microkernel design): the CMP is treated as a network of independent cores that communicate through explicit message passing rather than shared memory. Examples are the Microsoft–ETH Zurich Barrelfish, MIT’s fos and Berkeley’s Tessellation, which have been designed specifically to address many limitations of current OSes as we move into the many-core era.

All the above changes pose several implications, both opportunities and challenges, for the upper software layers: (1) programming paradigms designed for distributed systems, like the message passing interface (MPI) and software distributed shared memory (SDSM), a.k.a. shared virtual memory (SVM), become useful for many-core systems; (2) but they need remodeling, since the system bottleneck now lies in off-chip memory rather than the network. It is therefore vital to trim slow off-chip accesses and exploit the fast but small on-chip memory effectively.
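
As a rough illustration of the message-passing style that point (1) refers to, here is a minimal MPI sketch in C; the ring pattern and the per-core values are arbitrary and chosen only to show cores exchanging explicit messages instead of touching shared memory, which is the model non-cache-coherent many-core chips favor. On a cluster or many-core testbed you would launch it with mpirun, one rank per core.

#include <mpi.h>
#include <stdio.h>

/* Each rank sends a value to its right neighbour in a ring and receives
 * one from its left neighbour, communicating only through explicit
 * messages rather than shared memory. */
int main(int argc, char **argv) {
    int rank, size, send, recv;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    send = rank * rank;                    /* some per-core result */
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    /* Combined send/receive avoids deadlock in the ring. */
    MPI_Sendrecv(&send, 1, MPI_INT, right, 0,
                 &recv, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recv, left);
    MPI_Finalize();
    return 0;
}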

Increase your apt cache limit

When I tried to install the google-perftools-dev package using apt-get, I unfortunately got this error message in the terminal:

Reading package lists… Error!
E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
E: Error occurred while processing mixxx-libperf (NewVersion1)
E: Problem with MergeList /var/lib/apt/lists/ftp.de.debian.org_debian_dists_wheezy_main_binary-amd64_Packages
W: Unable to munmap
E: The package lists or status file could not be parsed or opened.

The fix is pretty easy: you just need to increase the value of APT::Cache-Limit in /etc/apt/apt.conf.d/70debconf.
$ sudo gedit /etc/apt/apt.conf.d/70debconf
Then add this line at the end of the file:

APT::Cache-Limit "100000000";

Save the file and exit, then run this in your terminal:

$ sudo apt-get clean && sudo apt-get update --fix-missing

and you won’t hit the apt cache limit error again.

 

Bitcoin exchange virtual currency .. What the heck is that !!

Have you ever heard about this new virtual digital currency?! Don’t worry if you haven’t !!

Leading Bitcoin exchange Mt. Gox has released a new website that provides a simple explanation of what Bitcoin actually is. The site, Bitcoins.com, is essentially a tutorial on the virtual currency, a single unit of which reached a price of US $1,000 for the first time ever last week. Mt. Gox’s site walks users through the basics — what Bitcoin is, why people use it, and how it works — before leading into a step-by-step guide on how to get started. It’s certainly not a comprehensive rundown, but the site could prove useful for the uninitiated or those who struggle to understand the Bitcoin concept.

Mt. Gox also released a new one-time password (OTP) card this week as part of an effort to strengthen the security of user accounts. The OTP card, announced Wednesday, allows users to set one-time passwords for their Mt. Gox accounts, thereby preventing hackers and scammers from gaining access to compromised accounts. (Security has been an ongoing issue for Mt. Gox and other Bitcoin-related services.) And, like many other companies this week, Mt. Gox is offering a Black Friday deal: zero-percent trading fees for four days. The promotion runs from Friday morning through midnight on Monday (Tokyo time).

Many cities have started to take this seriously, for example Hong Kong, where I live right now, as serious miners have started to build dedicated facilities for the sole purpose of Bitcoin mining. Journalist Xiaogang Cao visited one such center in Hong Kong, the “secret mining facility” of ASICMINER, reportedly located in a Kwai Chung industrial building. Check out this link to get the full story of those miners: http://www.theverge.com/2013/12/2/5165428/bitcoin-mine-in-hong-kong-uses-jelly-to-keep-cool

Believe it or not, a black market for Bitcoin exists and many people are taking it seriously, and last week some members of the EU Parliament even floated the idea of recognizing Bitcoin as an official currency, just like the Euro .. who knows 🙂

Intel’s “Corner to Landing” leap

Since the first details about the MIC architecture emerged, Intel has continually harkened back to its vision of a high degree of parallelism inside a power-efficient package that still promises programmability.

With the next-generation Xeon Phi eventually hitting the market with its (still unstated) high number of cores, on-package memory, and the ability to shape-shift from coprocessor to processor along the x86 continuum, many are wondering what kind of programmatic muscle will be needed to spring from Knights Corner to Knights Landing.

As Intel turns its focus on the Xeon front to doubling FLOPS, boosting memory bandwidth and stitching in I/O, Intel’s technical computing lead Raj Hazra says the long-term goal is to make the full transition from multi-core to manycore via the Knights-codenamed family.

One can look at Knights Landing as simply a new Xeon with higher core counts, since at least some of the complexities of using it as a coprocessor will no longer be an issue. Unlike with the current Xeon Phi, transfers across PCIe are eliminated, memory is local, and Knights Landing acts as a true processor that carries over the parallelism and efficiency benefits of Phi in a processor form factor, while still offering the option to use it as a coprocessor for specific highly parallel parts of a given workload. This should make programming for one of these essentially the same as programming for a Xeon, at least in theory.

Despite the emphasis on extending programmability, make no mistake, it’s not as though parallel programming is suddenly going to become magically simple–and certainly that’s still not the case for using coprocessors, says James Reinders, Intel’s software director. However, there are some notable features that will make the transition more seamless.

When it comes to using Knights Landing as a coprocessor, the real benefits of Knights Landing over Knights Corner become more apparent. As it stands now, many programmers using accelerators or coprocessors use offload models on mixed (serial and highly parallel) code: they write their programs to run on the processor but offload certain highly parallel sections. The advantage there is the power of the processors, which compared to accelerators and Phi are much better at serial tasks. Of course, programmers are keenly aware of Amdahl’s Law and are looking to OpenACC and OpenMP directives to address some of the problems with offloading, problems that Intel is addressing by nixing the offloading middleman.
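
For a sense of what that offload style looks like in code, here is a rough sketch using standard OpenMP 4.x target directives (the array names and sizes are made up, and Intel’s own offload pragmas use a different syntax): the highly parallel loop is shipped to the coprocessor while the serial setup stays on the host. On a self-hosted Knights Landing part the target directive becomes unnecessary and the same loop runs as an ordinary parallel region on the processor itself.

#include <stdio.h>

#define N 1000000

int main(void) {
    static float a[N], b[N], c[N];

    /* Serial setup runs on the host processor. */
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Offload model: ship the highly parallel loop to the device.
     * On a self-hosted chip, drop the 'target' line and keep the
     * plain 'parallel for'. */
    #pragma omp target map(to: a, b) map(from: c)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}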

As Reinders described, “One of the big things about Knights Landing in this regard is that to make it a processor we had to reduce the effects of Amdahl’s Law. Making Knights Landing a processor means we wanted to build a system around it where the program runs on it but it ‘offloads itself’ in a sense: there’s no such thing as offloading to yourself; you just switch between being somewhat serial to highly parallel, just like you do in a program you write for a processor today. However, Knights Landing is more capable of handling highly parallel workloads than any other processor today.”

The other way to program for Knights Landing (or its predecessor, for that matter) is to treat it as a processor hooked together with other Xeons or Phis using MPI. Knights Landing will support that model as either a processor or a coprocessor, Reinders said. “A lot of users today are just taking their applications and using MPI instead of offloading. When you build a Knights Landing machine they can all run MPI, and since they run a full OS you can do anything that a processor would do.”

By the way, as a side note on the OS: many users on the HPC front will likely not let the OS run wild and eat up a number of the cores (and there are definitely more than 61 on the new chips), and they will also want to keep the OS from munching into the high-bandwidth memory it sees sitting nearby. The number of cores the OS runs on is a matter of user-set policy, and as for keeping the OS’s greedy hands off the new on-package memory, workarounds are in development.

With that specific OS piece in mind, however, it’s easy to see why Reinders is giddy about Knights Landing. “You can think of Knights Landing exactly like it’s a Xeon with lots and lots of (but-we-still-can’t-tell-you-how-many) cores. The big difference is how good it is at highly parallel workloads. It’s a high core count Xeon. That’s how we get extreme compatibility with Knights Landing to make it a processor: every OS that boots on it will look at it and think it’s just a Xeon on steroids; it shouldn’t look any different. But again, you can set a policy to run the OS on one of the cores.” He expects that OEMs supplying systems will keep configuring machines with policies that favor keeping the OS contained and letting the applications have free rein over the other cores.

Among the refinements present in Knights Landing are 512-bit SIMD capabilities, which will eventually be extended across the entire Intel processor line. Currently, with AVX2 and its 256-bit width, users can pull 4 double-precision operations (or 8 single-precision) from a single clock; with the introduction of 512-bit vectors, that throughput doubles for both single and double precision. There is already 512-bit capability built into the current Xeon Phi, but it’s only usable in the coprocessor since it hasn’t been fully synched with the full set of x86 capabilities. People using the current Phi thus don’t have the throughput possibilities or all the functionality that Intel will roll out with Knights Landing.
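
To make the width difference concrete, here is a small intrinsics sketch (a hypothetical example assuming a compiler and CPU with AVX2 and AVX-512F support; the data values mean nothing): one 256-bit add covers 4 doubles, one 512-bit add covers 8, so the same number of instructions processes twice as many elements per clock.

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double y[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    double r256[4], r512[8];

    /* AVX2: one 256-bit register holds 4 doubles. */
    __m256d a4 = _mm256_loadu_pd(x);
    __m256d b4 = _mm256_loadu_pd(y);
    _mm256_storeu_pd(r256, _mm256_add_pd(a4, b4));

    /* AVX-512: one 512-bit register holds 8 doubles, so a single
     * add instruction does twice the work. */
    __m512d a8 = _mm512_loadu_pd(x);
    __m512d b8 = _mm512_loadu_pd(y);
    _mm512_storeu_pd(r512, _mm512_add_pd(a8, b8));

    printf("256-bit add: %g ... %g\n", r256[0], r256[3]);
    printf("512-bit add: %g ... %g\n", r512[0], r512[7]);
    return 0;
}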

Reinders has been teaching users how to tap into Xeon Phi, and as he introduces the concepts leading up to Knights Landing, everyone is “looking for holes in the armor.” He argues that the holes they know about are being addressed through the ecosystem, through compilers, and in house. “The simple answer is that anyone who already programs for Knights Corner will find the Landing leap an easy one, since there’s no new learning.”

This bodes well for Intel to take this highly parallel approach well beyond HPC applications in the future, especially if they continue to push the idea that there’s nothing “special” (i.e. difficult or accelerator-like for programmers) about it—that it’s simply a high core count processor. The beauty is that they can eventually round out their suite of processor choices so users can continually tailor these choices around their workloads and the degree of parallelism, performance and power required.

Credit to http://www.hpcwire.com