I recently bought an Arduino, which requires gcc-avr/avrdude to compile software for it. I installed an AVR toolchain for another microcontroller a while ago, but it is obviously an outdated version (gcc version 3.3 20030512 (prerelease)), so I went ahead and tried to update it, but that didn’t work.
(Please note that beforehand I also broke my aptdaemon through an incomplete Wine installation (I couldn’t get past the font installation agreement), but I fixed that via a re-installation and then accepting the agreement.)
I am trying to update these by running bingo’s build script, but the dependencies it requires cannot be installed because of avr. The terminal reports:
I have tried running Update Manager and updating my system via it, but all I get is an error message, which then tells me to try running apt-get -f install, which just gives the same result as last time.
So how can I fix my system? I really need the new avr. 🙂 BTW, my system is Ubuntu 11.04.
7 Answers
After you get that error, try sudo apt-get -f install to force an install of the files that didn’t get loaded because of the error.
Then try sudo apt-get update again, then sudo apt-get -f install, back and forth, until only the package that has the error is left.
sudo dpkg --configure -a
and clean the cache
sudo apt-get clean
This usually happens as a result of ‘Unmet dependencies for installed packages’.
Here’s a simple solution if you have ‘Synaptic’ installed:
- Open Synaptic.
- Go To ‘Status’ (in the left navigation).
- Choose ‘Broken’.
- Remove these broken packages.
Otherwise it can be dealt with via the CLI:
Open a terminal and run this command:
The above command will clean out the local repository of retrieved package files.
This will correct broken dependencies; -f here stands for “fix broken”.
This will configure all (-a) the packages which haven’t been configured yet. In the end, do run the update command sudo apt-get update.
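The commands described in these steps, gathered in order as one sketch:

```shell
sudo apt-get clean          # clean out the local cache of retrieved package files
sudo apt-get install -f     # -f ("fix broken") attempts to correct broken dependencies
sudo dpkg --configure -a    # configure all (-a) packages left unconfigured
sudo apt-get update         # finally, refresh the package lists
```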
Open Synaptic, then go to Status and choose Broken. Then completely remove the broken packages.
This should correct your system.
Had the same problem, an
fixed it. I hope this helps!
Try: sudo apt-get update && sudo apt-get -f install
I hope that this will resolve the issue.
If you’re not already, try changing your package repository reference to ‘Main’ or the United States. Doing this fixed my Python-dev unmet-dependencies problem (my 12.04 install was using the United Kingdom package repository, previously).
- In ‘Ubuntu Software Center’ (USC) go to the menu/tab ‘Edit => Software Sources’.
- Change the ‘Download from’ drop-down value to ‘Main Server’ or a server in the United States.
- Leave USC, then open ‘Update Manager’ from Ubuntu’s program menu, and ‘Check’ for software updates (or issue ‘sudo apt-get update’ in a terminal window).
- Update your software as you normally would, e.g. via ‘Update Manager’ or apt-get/aptitude in a terminal.
This repaired my repository and I went on to install whatever I needed afterwards, as normal.
Before rectifying my problem with the above instructions, various aptitude/apt-get commands suggested that I remove many, many packages, but, as you can appreciate, I didn’t fancy losing my 6+ months of package additions, even though I do snapshot the package list at times (see my gist for hints)! I’m really glad I found out about the instructions I’m leaving here.
Chris Hoffman is Editor-in-Chief of How-To Geek. He’s written about technology for over a decade and was a PCWorld columnist for two years. Chris has written for The New York Times and Reader’s Digest, been interviewed as a technology expert on TV stations like Miami’s NBC 6, and had his work covered by news outlets like the BBC. Since 2011, Chris has written over 2,000 articles that have been read nearly one billion times, and that’s just here at How-To Geek.
The hardest part of compiling software on Linux is locating its dependencies and installing them. Ubuntu has apt commands that automatically detect, locate and install dependencies, doing the hard work for you.
We recently covered the basics of compiling software from source on Ubuntu, so check out our original article if you’re just getting started.
Auto-apt watches and waits when you run the ./configure command through it. When ./configure tries to access a file that doesn’t exist, auto-apt puts the ./configure process on hold, installs the appropriate package and lets the ./configure process continue.
First, install auto-apt with the following command:
Once it’s installed, run the following command to download the file lists auto-apt requires. This process will take a few minutes.
After the first command is done, run the following commands to update its databases. These commands will also take a few minutes.
sudo auto-apt updatedb && sudo auto-apt update-local
After you’re done building auto-apt’s databases, you can start the ./configure process with the following command:
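The auto-apt commands referred to in this section, gathered in order (a sketch, assuming auto-apt is available from your release’s repositories):

```shell
sudo apt-get install auto-apt                         # install auto-apt itself
sudo auto-apt update                                  # download the file lists it requires
sudo auto-apt updatedb && sudo auto-apt update-local  # build its databases
sudo auto-apt run ./configure                         # run configure under auto-apt's supervision
```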
If you see an error message that says a specific file is missing, you may not know the package you have to install to get the file. Apt-file lets you find the packages that contain a specific file with a single command.
First, you’ll have to install apt-file itself:
After it’s installed, run the following command to download the file lists from your configured apt repositories. These are large lists, so downloading them will take a few minutes.
Run the following command, replacing “example.pc” with a file name, and the command will tell you exactly which package you need to install:
Install the package with the standard apt-get install command:
You can also perform a file search from the Ubuntu Package Search website. Use the “Search the contents of packages” section on the page to search a specific file.
It’ll give you the same results as apt-file, and you won’t have to download any file lists.
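The apt-file workflow described above can be sketched as follows; “example.pc” and the final package name are placeholders:

```shell
sudo apt-get install apt-file      # install apt-file itself
sudo apt-file update               # download the file lists (large; takes a few minutes)
apt-file search example.pc         # report which package ships the missing file
sudo apt-get install packagename   # install whichever package the search reported
```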
We covered apt-get build-dep in our initial post. If an earlier version of the program you’re trying to install is already in Ubuntu’s package repositories, Ubuntu already knows the dependencies it requires.
Type the following command, replacing “package” with the name of the package, and apt-get will install the required dependencies:
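The command looks like this; “package” is a placeholder for the program’s name in the repositories:

```shell
sudo apt-get build-dep package   # install the build dependencies Ubuntu records for the package
```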
Apt-get prompts you to install all the required dependencies.
If a newer version of the program requires different dependencies, you may have to install some additional dependencies manually.
All these commands use apt-get, so you can also use them on Debian, Linux Mint, and any other Linux distribution that uses apt-get and .deb packages.
Compiling and installing software is a pain and a problem I cannot overcome. I just want to run through my understanding of this process with someone more knowledgeable, to clear my mind and get to the next level.
Much of the scientific software I need is not distributed as packages. I understand that “./configure” sets up the compilation variables and checks for dependencies, “make” does the compilation, and “sudo make install” puts all the libraries and bins in their places. However, it never works. I rarely get out of the a) “./configure” stage without entering dependency hell, and if I do, b) “sudo make install” will probably nuke my box.
a) The dependency hell is very frustrating. Sometimes I have the library, but it doesn’t like it. Or the library doesn’t want to install. Or “configure” can’t find it. Or my distro placed it somewhere it shouldn’t be. Or there are two versions in my system. Problem is, I can’t understand how to diagnose and therefore fix these problems. What are some good references to learn for someone who doesn’t need to become a programmer?
b) My understanding is that “make install” will replace some libraries and change settings without my package manager being aware of it. Therefore, some programs won’t run and others can’t be updated. So, if I don’t use “make install”, and just keep the compiled binary in my user directory with its location added to the PATH, will I be in the clear?
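For context, the user-directory approach mentioned in b) would look roughly like this for an autotools-based project (the prefix path is just an example):

```shell
./configure --prefix="$HOME/opt/myprog"    # keep everything under your own directory
make
make install                               # no sudo needed; nothing system-wide is replaced
export PATH="$HOME/opt/myprog/bin:$PATH"   # make the binaries findable (e.g. add to ~/.bashrc)
```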
My box is single-user and has tons of free HD space, so I don’t really care about having multiple (dozens of) copies of libraries if that will solve my problems. Space is cheap.
I have a simple playbook that tries to install Debian packages downloaded locally on my server. This playbook runs on localhost and installs the Debian packages on the same system. But this playbook gives an error, “Dependency is not satisfiable”, for some of the packages, even though the dependency package is available in the local repository.
I can download all the dependencies required for the specific package into my local repository using apt-get install --download-only package_name
But in my playbook, I should have a mechanism to install the dependencies first, then install the actual package. This task should be dynamic; the playbook should resolve the dependencies by itself for any package install.
When a package has a dependency on another package, how can the playbook resolve it dynamically?
Some of the options explored:
Using ordered indexed_items, using gdebi... I’m looking for efficient logic.
1 Answer
Any specific reason that you aren’t using your own real package repository? If you use Ansible you seem to see the benefit of automating things, so this would be a natural step. And it would enable the appropriate installation order merely based on the dependency graph and the packages available. There are other tools for the job as well, but reprepro is easy to use and seems ideal for smaller-scale package repositories.
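A minimal reprepro setup might look like this; the codename, architecture, and paths are all illustrative:

```shell
sudo apt-get install reprepro
mkdir -p ~/myrepo/conf
cat > ~/myrepo/conf/distributions <<'EOF'
Codename: focal
Components: main
Architectures: amd64
EOF
reprepro -b ~/myrepo includedeb focal /home/local_repository/*.deb
# then point apt at it, e.g. with a line like this in /etc/apt/sources.list.d/local.list:
#   deb [trusted=yes] file:/home/youruser/myrepo focal main
```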
You don’t give information about the system on which this playbook fails. But the symptoms are rather clear. And none of that has anything to do with Ansible specifically.
- /home/local_repository/wireshark_3.0.5-1_amd64.deb could not be installed because it requires exactly wireshark-qt (= 3.0.5-1) (i.e. exactly that version of the wireshark-qt package)
- /home/local_repository/wireshark-qt_3.0.5-1_amd64.deb (the dependency named in the previous item) failed because its dependency libc6 (>= 2.29) was not available.
In order to figure out what exactly is going on you would have had to give information about the system. However, I can tell you how you can find out more details.
Simply use apt-cache policy on both of the dependencies (wireshark-qt and libc6) to see which versions of the packages are available at all.
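For example, checking both packages at once:

```shell
apt-cache policy wireshark-qt libc6   # shows installed and candidate versions and where they come from
```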
If I were to guess, your problem is that you are attempting to install this on a system which is older than what the prerequisites of these packages require. And trying to forcibly install a newer libc package is a really bad idea, given its role in the system. It could leave your system unusable.
For example, if you were attempting this on the current Debian 10 (i.e. currently Buster==stable), you can see in this table (contents are bound to change, so I am including the screenshot) that the libc6 (>= 2.29) prerequisite alone cannot be satisfied:
Alternatively you can look here and navigate between the current releases in the top right (at the time of this writing):
The first step to figure out whether there is a newer package available would be to look at the package search for your current release. To find out what that is, use lsb_release -a (to see all information) or lsb_release -cs to see the short codename (e.g. buster ).
Looking at the package versions you attempt to install, I’d guess you are attempting to install the eoan (19.10) package on an older Ubuntu, am I right?
You could attempt to download the sources (apt-get source) of the two packages in question (wireshark and wireshark-qt) and attempt to build them yourself. If you’re lucky you can build that new Wireshark version against whatever Qt and libc versions are included in your specific release. Arguably Qt (and its dependencies) would be more likely to be an issue than libc.
Alas, chances are that it will be easier to upgrade your system to 19.10, wait 4 months until focal aka Focal Fossa (Ubuntu 20.04) gets released, or install a VM or container enabling you to run that newer Wireshark version.
But again, none of that has anything to do with Ansible, except that you used Ansible to invoke the installation commands.
I am having dependency problems; whenever I do an apt-get install, I get this error message:
I already tried:
- apt-get clean , update , upgrade , install -f
- dpkg --configure -a
What should I do now?
3 Answers
http://ftp.de.debian.org/debian/ sid main is a repository for the Debian OS, not Ubuntu. You should not be using this repository. Here’s what you can do:
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bk
- This is to backup your sources.list file.
Open up /etc/apt/sources.list with your favorite editor, and delete everything, and repopulate it with the proper, default repositories. Here’s how you’ll get them:
- Go here: http://repogen.simplylinux.ch/
- Select your country and release.
- Select everything in the “Ubuntu Branches” box.
- Select everything in the “Ubuntu Updates” box except for the “Proposed” options.
- Select everything in the “Ubuntu Partner Repos” box.
- Select everything in the “Ubuntu Extras Repos” box.
- Scroll down to the very bottom and hit Generate List.
- Copy the output of the first box into your sources.list file and save it.
Run the following commands in order:
You’ll probably get some errors along the way. apt-get install -f should try to fix most issues, but I suspect that it won’t fix everything. dpkg will try to further configure the packages, although apt-get install -f should call it by default. The last command is to fully upgrade your system, including the Linux kernel, which is what you’re having problems with, judging from the logs you posted. I suggest you, again, run these commands after everything is done:
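Spelled out, the sequence described above is (a sketch; run in this order):

```shell
sudo apt-get update           # re-read the repaired sources.list
sudo apt-get install -f       # try to fix the broken dependencies
sudo dpkg --configure -a      # finish configuring half-installed packages
sudo apt-get dist-upgrade     # fully upgrade the system, including the kernel
```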
I am new to the computing world. My intention is to figure out a generic approach, if one exists, to solving cyclic dependencies while installing new software on Linux. Here I am using the case of Google Chrome to better illustrate my question. While installing Google Chrome (both using the package manager and apt-get) I encounter the following problem:
To solve the above error, I tried installing libappindicator1 but that returns another dependency error:
Now here we encounter the cyclic dependency. When trying to install libindicator7 the following error is received:
As you can see, I cannot install the package because of the dependencies. Now one way is to use apt-get -f install and let Linux magically do its work. But that won’t teach me much. Using this example (or suggest a better one), can we figure out a better approach to solving the problem of cyclic dependencies? If this is a stand-alone case of cyclic dependency while installing new software, or I made a mistake in interpreting the errors, then I can remove the question.
Some helpful links-
3 Answers
The problem is the usage of dpkg to install google-chrome-stable. dpkg does not install the required dependencies, and leaves the system in a broken state.
This installs the package with required dependencies.
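With a reasonably recent apt, a local .deb can be installed together with its dependencies in one step (the filename is illustrative):

```shell
sudo apt install ./google-chrome-stable_current_amd64.deb
# or, if a plain "dpkg -i" has already left things broken:
sudo apt-get install -f
```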
dpkg only installs a package, so doing dpkg -i packageName.deb will only install this one .deb package; it will notify you of any dependencies that need to be installed, but it will not install them, and it will not configure packageName.deb because, well, the dependencies are not there.
apt is a Package Management System that handles the installation of Deb packages on Debian-based Linux distributions. A Package Management System is a set of tools that will help you install, remove, and change packages easily. So apt is like a clever dpkg.
DPKG is the software at the base of the package management system in the free operating system Debian and its numerous derivatives. dpkg is used to install, remove, and provide information about .deb packages. dpkg (Debian Package) itself is a low-level tool. 
APT (for Advanced Package Tool) is a set of tools for managing Debian packages, and therefore the applications installed on your Debian system. APT makes it possible to Install applications, Remove applications, Keep your applications up to date and much more.
So if you move step by step on your installation
Once you download a .deb package you can unpack it, and then unpack the contained control.tar.gz file. There you will find the set of all the required packages.
Find all the dependencies for that specific Debian package. For google-chrome you would have something like
You would need to install all the dependencies for that specific package. Each dependency might depend on a set of other dependencies. You would have a tree of these dependencies. Either you can manually install all these dependencies or use something like apt or yum or aptitude .
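Both views are available from the command line: dpkg-deb prints a single package’s declared Depends line, while apt-cache can walk the whole dependency tree for you (the .deb filename is illustrative):

```shell
dpkg-deb --info google-chrome-stable_current_amd64.deb   # prints the control file, including Depends:
apt-cache depends --recurse --no-recommends --no-suggests google-chrome-stable | less
```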
What either of these package managers would do for you is they would construct a dependency tree for you and install all the relevant packages before installing your Debian package.
So, ideally there should not be any loops in the dependency tree, but it might be the case that some packages are already installed in a newer/older version than what is required by another installed package. Then you can end up in a cyclic dependency loop.
So, how apt handles cyclic dependencies is mentioned in , and I think you can consider it a generic algorithm for resolving a dependency manually, but it’s not recommended. Circular dependencies happen in the repositories, but the ones left standing obey some specific rules. Usually these are tightly bound packages, so the Depends relationship between them specifies the exact version number.
I was using this script to install basic software, but had to interrupt it because of slow internet speed. Now when I run sudo apt-get install npm , I get the following error
13 Answers
If sudo apt-get install -f doesn’t work, try aptitude:
Aptitude will try to resolve the problem.
As an example, in my case, I still received some errors when trying to install libcurl4-openssl-dev :
So I tried aptitude, and it turned out I had to downgrade some packages.
First of all, try this:
If the error still persists, then do this:
Afterwards, try this again:
But if it still couldn’t resolve the issue, check for the dependencies using sudo dpkg --configure -a and remove them one by one. Let’s say the dependencies are on npm; then go for this:
Then go to /etc/apt/sources.list.d and remove any Node list if you have one. Then do a
Then check for the dependency problem again using sudo dpkg --configure -a , and if it’s all clear then you are done. Later on, install npm again using this:
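The npm-specific steps above, put together as one sketch (the exact list-file name under /etc/apt/sources.list.d varies with how Node was added):

```shell
sudo apt-get remove --purge npm                    # remove the half-installed package
sudo rm /etc/apt/sources.list.d/nodesource.list    # drop any Node list file you added (name varies)
sudo apt-get update
sudo dpkg --configure -a                           # verify no dependency problems remain
sudo apt-get install npm                           # reinstall
```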
Then install the Node.js package.
The answer above will also work for general cases (for dependencies on other packages like django, etc.); just after the first two steps, use the same process for the package you are facing the dependency problem with.
The command to have Ubuntu fix unmet dependencies and broken packages is
from the man page:
-f, --fix-broken Fix; attempt to correct a system with broken dependencies in place. This option, when used with install/remove, can omit any packages to permit APT to deduce a likely solution. If packages are specified, these have to completely correct the problem. The option is sometimes necessary when running APT for the first time; APT itself does not allow broken package dependencies to exist on a system. It is possible that a system’s dependency structure can be so corrupt as to require manual intervention (which usually means using dselect(1) or dpkg --remove to eliminate some of the offending packages)
Ubuntu will try to fix itself when you run the command. When it completes, you can test if it worked by running the command again, and you should receive output similar to:
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
Installing is not always straightforward. This guide should help if the program you are trying to install cannot be found using Add/Remove, found under the Applications menu. A program may require additional packages from the repositories, or libraries to be compiled as well. Installing a package from the repositories is the easiest and fastest method of installation.
In Ubuntu there are several standard ways to install software. The most common is to use Synaptic Package Manager, or the command-line tools “apt-get” or “dpkg”. These tools download .deb files from repositories and install them. Not all packages are visible by default, though; you may need to enable extra repositories to see all available packages. To learn more about these tools see SynapticHowto, AptGetHowto, and AddingRepositoriesHowto.
Users can also install .deb files manually. Once the package has been downloaded, clicking on the .deb file will open the Package Installer. The Package installer will let you know if there are any packages that must be installed before the current one can be.
In the above picture, the pidgin package will not install because the dependency libpurple0 is not installed. Once the dependencies are installed, close and reopen Package Installer. You will then be able to install the package. Pidgin is only used as an example here; Pidgin can be installed through Synaptic.
Most of the time, software that is not packaged as a .deb file will be distributed as a tar file. Common tar file types include .tar, .tar.gz, .tar.bz, and .tar.bz2. Each of these is handled in the same way. Double-click the downloaded file, and Archive Manager will open as shown below:
Once Archive Manager is open, the files can be extracted by clicking the Extract button. A dialog box will appear asking where you would like to extract your file(s) to. The best place to extract files is a folder (or directory) in your home folder set up for installing programs.
Important: after untarring the file, check to see if there is a .bin or .sh file. These files are typically installers and will install programs without the user having to follow the next steps. To get these files to run you may need to change the permissions; to do this, right-click on the file and select Properties. In the Permissions tab, check the run-as-executable box. Then double-click the file again to run it.
Once you have untarred the files for the program, you need to compile them, i.e. translate the raw code into a form that your computer can read. You can compile software on Ubuntu or any other GNU/Linux distro. Before you proceed you need to install the build-essential package. This contains everything required for compiling to work on any Ubuntu version. It is not part of the default install. To install it, either use Synaptic or the command-line apt-get.
After build-essential has been installed, you will need to change the working directory of the terminal to the directory that contains the untarred files.
For example, if the files are in /home/user/source, you would use the command
Once the working directory has been changed to the directory where the files are, you will be able to compile the program. To do this you will need to run two commands:
This command will create the make file. It will also notify you of any dependencies that need to be resolved. To resolve any dependencies, search for the packages in Synaptic Package Manager. When you find a package, make sure you install both the package and any -dev packages with the same name. Note: not all programs will have a ./configure script.
Once the Configure script has done its work, you can run the make file. The make file does the actual compiling of the program.
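The two commands, in order, are:

```shell
./configure   # creates the make file and reports unresolved dependencies
make          # does the actual compiling
```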
Now that you have downloaded, untarred, and compiled the program, you can install it. To install the program:
Because installing with make install can make programs hard to uninstall, you can use checkinstall to make the process a bit easier. First you will need to install the checkinstall and gettext packages:
After checkinstall has been installed, it can be used by replacing the above make install command with:
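The replacement command is simply:

```shell
sudo checkinstall   # builds a .deb from "make install" and registers it with the package manager
```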
To uninstall a program that has not been installed using apt-get or a similar installer, you must again change the working directory to the directory containing the program’s source:
Once you have changed the working directory, you can run:
This command may not work if the developer has not included an uninstall target in the program’s makefile.
Not all Linux distributions use the same style of packages to install software. Debian-based distributions use .deb packages, while Red Hat-based distributions use .rpm packages. A tool, Alien, does exist to convert .rpm packages to .deb packages for use in Ubuntu, but it does not always work. Alien is available in the repositories and can be installed using apt-get:
The package can then be installed using this command:
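The two steps, installing Alien and then converting and installing the package, look like this (the .rpm filename is illustrative):

```shell
sudo apt-get install alien      # install Alien from the repositories
sudo alien package.rpm          # convert the .rpm, producing package.deb
sudo dpkg -i package.deb        # install the converted package
```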
OtherWaysToInstall (last edited by user fooka, 2009-04-30 03:35:02)
by mike-d · Published November 9, 2007 · Updated January 5, 2009
Let’s now see how you would install source files in Ubuntu.
Source files contain the programs, and hence before installation you need to compile them. So you need to install build-essential from the Synaptic Package Manager; alternatively, build-essential is already present on the CD, so you can install it from there, or you can install it by typing this in the terminal:
sudo aptitude install build-essential
Suppose you have a source file named src.tar.gz. What you do initially is extract the source files; then, in the terminal, navigate to the folder where the source file was extracted using the cd command, and type the following:
sudo make install
Let’s see what each one of them does.
./configure checks whether the required dependencies are available on your system or not; if not, an error is reported.
make compiles the source code, and make install is used to install the program into its location.
If it asks for an installation location, it is recommended to keep all the source in /usr/src.
make clean removes any temporary files created in the compilation process of the source.
And that’s it: your source program is installed on your system.
Comments
Tried that with some packages, it works great. Finally my cpu has to do something.
But how can I get rid of an installed source package? “make uninstall” doesn’t work, and neither does “apt-get remove”. It seems this removes only the sources but not the program.
Do I really have to reinstall linux and everything from scratch?
I have this problem while running configure in the firefox folder; what to do?
[email protected]:/home/kmmr/Desktop/me/firefox# ./configure
bash: ./configure: No such file or directory
Hi sir, I am interested to know about the source code of the Ubuntu OS; please send a response to this.
When I type clean install I get the error:
No command ‘clean’ found, did you mean:
Command ‘uclean’ from package ‘svn-buildpackage’ (universe)
Command ‘clear’ from package ‘ncurses-bin’ (main)
Thanks in advance,
I got an error: make: *** No rule to make target ‘install’. Stop.
I downloaded vlc media player 2.0.1 in tar.xz format.
Extracted the files
Changed the location in the terminal to the extracted location.
When I type the command ./configure
it showed me the error
configure: error: No package ‘dbus-1’ found.
was downloaded from ralink to install a wifi adapter into linux and these were the drivers.. after the read me, the makefile.. f**k Trying to compile source code in .b2z files!! were is our “automatic compiling of .b2z folders” [installer software application] automation that will turn them into .deb. i dont understand terminal luanguage. The .EXE was there to make it EASY!! [[automatic compiling of .b2z folders, will become automation Software building application]] any sorce .b2z that you get, should automatically be compiled with this new Compiling Software… What the F**K. *Face, Palm..
It should be “make clean”. I just tried it on Ubuntu 12.04 and it worked.
I feel your pain. If you look at the original date of the how-to then our dates you see that not much has changed. I still laugh when the original explanation STILL seems completely acceptable to these folk 5 YEARS LATER. It takes LONGER to terminal, make, install, clean install, blah-blah, than to just double-click and install. Try to convince the powers that be and it’s like pulling teeth. “Look at all the power you have at terminal” they say. Meanwhile, 5 out of 6 comments here are STILL trying to INSTALL their file/program instead of using it and getting back to work? It’s here that they turn a blind eye to the obvious. Look at ANDROID os. When the linux world WANTS you to use it. as in competing with apple, you are shot to the front of the line; a fingers touch away from bliss. In fact, you HAVE to root your device JUST to get terminal.
The reason why it is acceptable is probably because it WORKS, I used windows(yuk) for 16 years and after switching to linux i literally felt that my life had been a waste up until then… Sure you have to do a bit more, Sure it can be a bit harder, But once you do it, it becomes much easier than Windoze or Android or anything else could hope to be. Also learning this stuff helps you in the future, And I use my programs much more than I watch the installation of them. Also using your computer isn’t supposed to be just to get a job done, It should be FUN. I’m loving Linux right now(Arch Linux ftw ^_^) and i’d probably DIE before i switched back to Windoze or Mac or anything else(With the exception of BSD) Also Android IS linux… And there’s plenty of linux distro’s that are just as EASY as android… and 4x as good. Even Arch linux has distro’s that are easy enough that even a n00b can get it nowadays. And thing’s like ubuntu are twice as easy to get/install as windoze and a million times as good.
You guys need to make things much easier and explain the installation process as if you were teaching a child how to use a computer for the very first time. Most newcomers have never in their lives used a command line. You have to hold their hand through the entire installation process.
What are dependencies?
Dependencies are files or components in the form of software packages essential for a program to run properly. This is the case with Linux overall – all software depends on other pieces of code or software to function correctly. So, this sort of “sectional” approach is where dependencies originate from. They are additional but essential pieces of code that are crucial to making programs work. This also explains why we get dependency errors during program installations as the programs being installed depend on other, missing code.
What is APT?
In the domain of Linux and, more specifically, Ubuntu, APT is short for Advanced Package Tool. It is the primary front end to a set of libraries and tools for software package management in Linux distributions such as Ubuntu and Debian.
Then comes the apt command, which is the most common way of interfacing with the Advanced Package Tool. Ubuntu users use apt to install new software, update and upgrade not only existing packages but also the entire operating system. This is what makes apt a very powerful and commonly used command in Ubuntu. Furthermore, the abilities of the apt command are not limited to just installing software packages, as it also plays a very important role in handling dependencies.
When downloading dependencies, we use the apt-get command. The primary function of apt-get is to obtain software packages and information from their respective repositories. The sources of these packages are authenticated and secure. The same procedure works for updating and removing dependencies.
Now, let us finally get into using the apt-get command and start installing dependencies. But before that, it is important to learn what the syntax of this command is.
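The original syntax listing did not survive in this copy; as a rough sketch (package-name is a placeholder, not a real package), the general form is:

```shell
# general form: apt-get [options] <command> <package...>
sudo apt-get update               # refresh the package lists
sudo apt-get install package-name # install a package and its dependencies
sudo apt-get remove package-name  # remove a package
```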
The syntax described above is the most commonly used one; however, there are some other ways to call this command.
Another method to use apt-get is as follows.
With that being said, you should now have a good general understanding of how apt-get works and how you can use it to install dependencies. The next step is to start looking at practical instances of its usage to see how we can use different command variants to manipulate dependencies.
Let us suppose that you want to install Python on your Ubuntu system. The first thing you would need before you install Python is a dependency known as libpython2.7-minimal. So, you can run the command below to get it.
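The command itself is missing from this copy; presumably it is simply:

```shell
sudo apt-get install libpython2.7-minimal
```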
(You may need to enter Ubuntu as root, so run $ sudo -i)
The output shows that the required package has been retrieved, extracted, and configured. We also get the amount of storage space the package is consuming. If any missing packages are remaining, we can simply run the command below to install those as well.
Now that all the dependencies are taken care of, we can install Python with the traditional command as follows.
That pretty much covers how you can install dependencies in Ubuntu; however, there are other ways you can manipulate them as well. We will cover these in the next section.
Let’s say, for instance, you wish to remove the dependency we just installed. You can do that by executing the following command.
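A sketch of that removal command, for the dependency installed earlier:

```shell
sudo apt-get remove libpython2.7-minimal
```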
You can run an apt command to update all the packages on your system. This is generally considered good, precautionary practice before proceeding with regular processes. It makes sure that all of your dependencies are met and updated.
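The update/upgrade step described above might look like:

```shell
sudo apt update   # refresh the package lists
sudo apt upgrade  # upgrade all installed packages to their latest versions
```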
Next, we will see how one can list all the packages on their system by running an apt command. The output of this command will display to us a long list of software packages that are available for installation.
However, you may want to install a specific package but not know which other dependencies need to be installed for it to work. Ubuntu fixes this issue through the showpkg flag. Run the command below to find out which dependencies are required.
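The text calls showpkg a flag; it is actually a subcommand of apt-cache. For the libslang2 example:

```shell
apt-cache showpkg libslang2
```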
Here, libslang2 is the initial package we wanted to install. In short, we can use the showpkg command to obtain more information on the dependencies we need for a certain package.
As we mentioned earlier, all the packages we install consume disk space, whether additional dependencies or the main programs themselves. Therefore, due to excessive dependencies, our computer can get cluttered. But worry not, as Linux has us covered in that department as well. You can simply run the commands given below to “clean” your dependencies.
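The cleanup commands described here might look like:

```shell
sudo apt-get clean      # clears the entire local cache of downloaded .deb files
sudo apt-get autoclean  # clears only cached .deb files that can no longer be downloaded
```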
In CentOS, the same operation is performed by the commands yum clean or yum clean all. The clean flag clears all .deb files from the local cache in /var/cache/apt/archives/, except for lock files. The autoclean flag also clears cached .deb files as described above, but only the obsolete ones, i.e. software packages that are no longer available for download.
In this article, we went into great detail about how one can install dependencies through apt. We first learned how dependencies work and why they are needed. Later on, we saw how one could install them and further manipulate them through other commands.
About the author
Hi there! I’m a Software Engineer who loves to write about tech. You can reach out to me on LinkedIn.
Applies to: Visual Studio, Visual Studio for Mac
When building a solution that contains multiple projects, it can be necessary to build certain projects first, to generate code used by other projects. When a project consumes executable code generated by another project, the project that generates the code is referred to as a project dependency of the project that consumes the code. Such dependency relationships can be defined in the Project Dependencies dialog box.
A project dependency is automatically created when you add a project-to-project reference from one project to another project. Before you perform these steps, consider if you should instead create a project-to-project reference, which in addition to creating a dependency relationship between the projects, also creates a reference that you can use to build code that uses classes, interfaces, and other code entities from the other project. See Managing references in a project.
To assign dependencies to projects
In Solution Explorer, select a project.
On the Project menu, choose Project Dependencies.
The Project Dependencies dialog box opens.
On the Dependencies tab, select a project from the Project drop-down menu.
In the Depends on field, select the check box of any other project that must build before this project does.
Your solution must consist of more than one project before you can create project dependencies.
To remove dependencies from projects
In Solution Explorer, select a project.
On the Project menu, choose Project Dependencies.
The Project Dependencies dialog box opens.
On the Dependencies tab, select a project from the Project drop-down menu.
In the Depends on field, clear the check boxes beside any other projects that are no longer dependencies of this project.
To view the build order
From the Project Dependencies dialog, you can switch to the Build order tab to view the build order for the solution.
To view the build order in a solution at any time, right-click on the solution node and choose Project build order.
You can use the Build order tab to view the order that projects will be built, but you can’t directly change the order from this tab.
The order you see listed is the desired logical build order, but in practice, Visual Studio further optimizes the build process by building multiple projects in parallel. However, as long as you’ve specified the project dependencies, any dependent projects will not start building until after their dependencies have completed.
While there are thousands of packages in the Ubuntu archive, there are still a lot nobody has gotten to yet. If there is an exciting new piece of software that you feel needs wider exposure, maybe you want to try your hand at creating a package for Ubuntu or a PPA. This guide will take you through the steps of packaging new software.
You will want to read the Getting Set Up article first in order to prepare your development environment.
4.1. Checking the Program
The first stage in packaging is to get the released tar from upstream (we call the authors of applications “upstream”) and check that it compiles and runs.
This guide will take you through packaging a simple application called GNU Hello which has been posted on GNU.org.
Download GNU Hello:
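The download command is missing from this copy; assuming release 2.10 (check gnu.org for the current version):

```shell
wget http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```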
Now uncompress it:
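Assuming the 2.10 tarball from the previous step:

```shell
tar xf hello-2.10.tar.gz
cd hello-2.10
```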
This application uses the autoconf build system so we want to run ./configure to prepare for compilation.
This will check for the required build dependencies. As hello is a simple example, build-essential should provide everything we need. For more complex programs, the command will fail if you do not have the needed libraries and development files. Install the needed packages and repeat until the command runs successfully.
Now you can compile the source:
If compilation completes successfully you can install and run the program:
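Putting the three build steps together (a sketch; hello installs to /usr/local by default):

```shell
./configure        # check build dependencies and prepare for compilation
make               # compile the source
sudo make install  # install the program system-wide
hello              # run it
```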
4.2. Starting a Package
bzr-builddeb includes a plugin to create a new package from a template. The plugin is a wrapper around the dh_make command. Run the command providing the package name, version number, and path to the upstream tarball:
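The exact invocation is not shown in this copy; with the package name and version used here, it would look something like:

```shell
bzr dh-make hello 2.10 hello-2.10.tar.gz
```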
When it asks what type of package type s for single binary. This will import the code into a branch and add the debian/ packaging directory. Have a look at the contents. Most of the files it adds are only needed for specialist packages (such as Emacs modules) so you can start by removing the optional example files:
You should now customise each of the files.
In debian/changelog change the version number to an Ubuntu version: 2.10-0ubuntu1 (upstream version 2.10, Debian version 0, Ubuntu version 1). Also change unstable to the current development Ubuntu release such as trusty .
Much of the package building work is done by a series of scripts called debhelper . The exact behaviour of debhelper changes with new major versions, the compat file instructs debhelper which version to act as. You will generally want to set this to the most recent version which is 9 .
control contains all the metadata of the package. The first paragraph describes the source package. The second and following paragraphs describe the binary packages to be built. We will need to add the packages needed to compile the application to Build-Depends: . For hello , make sure that it includes at least:
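A minimal sketch of that Build-Depends line (debhelper 9 per the compat note above; your control file will contain more fields):

```
Build-Depends: debhelper (>= 9)
```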
You will also need to fill in a description of the program in the Description: field.
docs contains any upstream documentation files you think should be included in the final package.
README.source and README.Debian are only needed if your package has any non-standard features, we don’t so you can delete them.
source/format can be left as is, this describes the version format of the source package and should be 3.0 (quilt) .
rules is the most complex file. This is a Makefile which compiles the code and turns it into a binary package. Fortunately most of the work is automatically done these days by debhelper 7 so the universal % Makefile target just runs the dh script which will run everything needed.
All of these files are explained in more detail in the overview of the debian/ directory article.
Finally commit the code to your packaging branch:
4.3. Building the package
Now we need to check that our packaging successfully compiles the package and builds the .deb binary package:
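A sketch of the build command (the flags are explained next):

```shell
bzr builddeb -- -us -uc
```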
bzr builddeb is a command to build the package in its current location. The -us -uc flags tell it there is no need to GPG-sign the package. The result will be placed in the parent directory ( .. ).
You can view the contents of the package with:
Install the package and check it works (later you will be able to uninstall it using sudo apt-get remove hello if you want):
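A sketch of installing the freshly built package (the exact filename depends on your version and architecture):

```shell
sudo dpkg --install ../hello_2.10-0ubuntu1_amd64.deb
hello
```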
You can also install all packages at once using:
4.4. Next Steps
Even if it builds the .deb binary package, your packaging may have bugs. Many errors can be automatically detected by our tool lintian which can be run on the source .dsc metadata file, .deb binary packages or .changes file:
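Running lintian on the source metadata file might look like this (the filename is an assumption based on the version used above):

```shell
lintian ../hello_2.10-0ubuntu1.dsc
```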
To see a verbose description of the problems, use the --info lintian flag or the lintian-info command.
For Python packages, there is also a lintian4python tool that provides some additional lintian checks.
After making a fix to the packaging you can rebuild using -nc “no clean” without having to build from scratch:
Having checked that the package builds locally you should ensure it builds on a clean system using pbuilder . Since we are going to upload to a PPA (Personal Package Archive) shortly, this upload will need to be signed to allow Launchpad to verify that the upload comes from you (you can tell the upload will be signed because the -us and -uc flags are not passed to bzr builddeb like they were before). For signing to work you need to have set up GPG. If you haven’t set up pbuilder-dist or GPG yet, do so now:
When you are happy with your package you will want others to review it. You can upload the branch to Launchpad for review:
Uploading it to a PPA will ensure it builds and give an easy way for you and others to test the binary packages. You will need to set up a PPA in Launchpad and then upload with dput :
You can ask for reviews in #ubuntu-motu IRC channel, or on the MOTU mailing list. There might also be a more specific team you could ask such as the GNU team for more specific questions.
4.5. Submitting for inclusion
There are a number of paths that a package can take to enter Ubuntu. In most cases, going through Debian first can be the best path. This way ensures that your package will reach the largest number of users as it will be available in not just Debian and Ubuntu but all of their derivatives as well. Here are some useful links for submitting new packages to Debian:
- Debian Mentors FAQ – debian-mentors is for the mentoring of new and prospective Debian Developers. It is where you can find a sponsor to upload your package to the archive.
- Work-Needing and Prospective Packages – Information on how to file “Intent to Package” and “Request for Package” bugs as well as list of open ITPs and RFPs.
- Debian Developer’s Reference, 5.1. New packages – The entire document is invaluable for both Ubuntu and Debian packagers. This section documents processes for submitting new packages.
In some cases, it might make sense to go directly into Ubuntu first. For instance, Debian might be in a freeze making it unlikely that your package will make it into Ubuntu in time for the next release. This process is documented on the “New Packages” section of the Ubuntu wiki.
This guide will explain what the build-essential meta-package is and what it includes when installed on your Ubuntu system.
build-essential is what is called a meta-package. It in itself does not install anything. Instead, it is a link to several other packages that will be installed as dependencies.
In the case of the build-essential meta-package, it will install everything required for compiling basic software written in C and C++.
On Ubuntu, this meta-package includes five individual packages that are crucial to compiling software.
- gcc – This tool is the GNU compiler for the C Programming language.
- g++ – This package is the GNU compiler for the C++ programming language.
- libc6-dev – This is the GNU C library. This package contains the development libraries and header files used to compile simple C and C++ scripts.
- make – This is a useful utility that is used for directing the compilation of programs. The make tool interprets a file called a “ makefile ” that directs the compiler how to work.
- dpkg-dev – We can use this package to unpack, build and upload Debian source packages. This utility is useful if you want to package your software for Debian based system.
Basically, by installing the build-essential package, you give yourself everything you need to compile basic C and C++ software on Ubuntu.
You could install each of these packages individually if you wanted to. However, the build-essential meta-package makes it simple to get everything you need with a single package.
While build-essential provides a good starting point on Ubuntu, you may need to install additional libraries to compile more complicated software.
Installing build-essential on Ubuntu
The build-essential meta-package is available directly from the official Ubuntu repository making it a straightforward installation process.
For the following steps, you will need to be using the terminal on your Ubuntu device. You can open the terminal easily by pressing CTRL + ALT + T .
Alternatively, you can use SSH to interact with your Ubuntu device remotely.
1. Before we can install the build-essential package on Ubuntu, we should first run an update.
Running an update ensures that the package list we have is pointing to the latest packages.
The package list is updated from the repositories listed within the sources file and its subdirectories.
On a new installation of Ubuntu, this will only be reading from the official package repositories managed by Canonical.
2. We can easily install the build-essential package using apt by running the command below.
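The install command referred to here:

```shell
sudo apt install build-essential
```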
By running this command, the apt package manager will look for build-essential within the package list.
Once found, it will check to see what dependencies the package requires. In this case, apt will be installing the gcc , g++ , libc6-dev , dpkg-dev , and make packages.
Verifying that build-essential is Installed
Verifying that the build-essential meta-package is installed on your Ubuntu device is a relatively straightforward task.
All we need to do is get the “ gcc ” and “ g++ ” compilers to output their versions.
Doing this will indicate to us that both packages have been installed successfully.
1. Let us start by checking the version of gcc by running the command below.
gcc is the GNU compiler for the C programming language.
Below is what you should get from running this command on your Ubuntu system.
The version numbers will differ slightly depending on what version of Ubuntu you are running. For example, we are running Ubuntu 20.04.
2. While we are at it, we can also check what version of g++ got installed by using the following command.
g++ is a compiler much like gcc but is used to compile software written in C++.
After running this command, you should see a message similar to what we have below.
The version number displayed below will, of course, be different depending on what version of Ubuntu you are using.
Compiling your First C Program
Our next step will be to write a simple C program that prints a single line of text to the command line.
In this case, we will be showing how the gcc compiler works from the build-essential package.
1. Let us begin by writing a small C program.
You can start writing this script by using the nano text editor on your Ubuntu device.
2. Within this file, enter the following lines of code.
This code is incredibly straightforward as its whole purpose is to print some text to the terminal.
We start by including the standard input-output library header ( stdio.h ). This library contains the IO functionality we need for talking with the command line.
We then have the “ main() ” function. This function is called whenever the C program is run.
Within this function, we make a simple call to “ printf() ” that will print the text “ Hello World ” to the terminal. We use “ \n ” to add a new line at the end of the text.
3. Once you have entered the code into the file, you can now save it.
In nano, you can save and quit by pressing CTRL + X , then Y , and finally ENTER .
4. With our little script written, we can now compile it into a program.
As this program was written in C, we will be using gcc to compile it. This package was installed on our Ubuntu system as a part of the build-essential meta-package.
To compile our script, we need to run the following command on our system.
Using this command, we use gcc to compile the script we wrote called “ helloworld.c “.
We use the “ -o ” option to tell the compiler to save the compiled version of the script as “ helloworld “.
5. Your device should compile the script almost instantly.
Once compiled, we can try running it to verify that everything worked correctly.
From this, you should see the text “ Hello World ” appear in your command line.
You should now know what the build-essential package is and what gets installed alongside it on Ubuntu.
The meta-package contains everything you need to compile the most basic C and C++ scripts.
During this guide, you will also have gotten a chance to test one of the compilers the build-essential package installs.
If you are still unsure what exactly the build-essential package is or how to install it on Ubuntu, please leave a comment below.
Extensions to the MultiarchSpec necessary for automated cross-building and
Many people want cross-compiling to work easily on Debian systems. This requires the availability of cross-toolchains, and the ability to install cross-dependencies before building a package. Both of these things can be provided by minor developments of the base MultiarchSpec. We do already have methods to achieve these things (buildcross, dpkg-cross, cross-toolchain-base), but in the case of cross-toolchains there is a lot of complexity and unnecessary rebuilding, and in the case of build-dependencies there are limitations and unreliability. Using multiarch will result in cleaner packaging and more reliable build-dep installation mechanisms than pre-multiarch methods.
Building cross-compilers in the archive, with autobuilders, requires cross-architecture build dependencies to be specifiable, or to have a second compiler-supplied copy of the host-arch libc, libstdc++ and libgcc1 libraries (Ubuntu cross compilers are built the latter way).
Installing cross-dependencies benefits from library-dev packages being multiarchified as well as the library packages so that both the HOST and BUILD architectures can be installed together. Many packages can be cross-built without this, but some require both architectures of the same library to be installed together, and it is much more convenient for developers if installing the HOST arch version of a library does not remove the BUILD arch version of it and vice versa.
Prior to multiarch, only packages for the machine architecture can be installed, so dpkg-cross is used to convert library and -dev packages to be of arch all, and move library and header files to a co-installable path under /usr/
The binutils package can be told to build a cross variant by passing a special environment variable (TARGET=). As there are no dependencies on target packages, this works for any GNU triplet supported by upstream, and can be automated easily.
The gcc package has a mechanism in place that rewrites various files in debian/ so that a set of cross compiler packages is built. The rewritten control file then declares build dependencies on several packages whose names end in a dash, the target Debian triplet and “-cross”. These must be generated with the “dpkg-cross” tool from target packages (Note: This is a restriction only found in Debian and emdebian, in Ubuntu the dpkg-cross use is only needed for building the eglibc cross packages). Automated bootstrap of new architectures is not possible this way, as a C library package compiled for the target is necessary to start building the packaged compiler.
Cross-gcc and cross-toolchain-base packages have been provided for Ubuntu which automate the process of doing a cross-toolchain bootstrap build, by utilizing the “binutils-source”, “gcc-source”, “glibc-source”/”uclibc-source”, “kernel-source” and “gdb-source” binary packages together with a framework package containing build scripts and several small helper packages that can be fed to autobuilders. See MultiarchCrossCompilers for details of the build processes.
Individual packages can be cross-compiled by passing a “-a” option to dpkg-buildpackage, which presets the environment variables accordingly. The package is responsible for honouring these variables, which for autotools-using packages can be achieved by passing –build and –host parameters to the configure script. The package’s build dependencies need to be split into build and host dependencies and the easiest way to do this is to re-use the multiarch specification with some minor extensions.
What is required in package dependencies is for the depending source package to distinguish build-dependencies which are satisfiable by any architecture (‘tools’) from build-dependencies which can only be satisfied by packages of the same architecture (generally ‘libraries’). This is very similar to the Multi-Arch: field options ‘foreign’ (for tools) and ‘same’ (for libraries), respectively. However, it is not exactly the same, because the architecture relationship is defined by the depending package, not the depended-on package: only the depending package knows what it needs the build-dependency for. This is recognised in the multiarch spec with the Multi-Arch: option ‘allowed’ and the Depends: package:any syntax.
Despite the relationship being ‘from the wrong end’, in practice it is almost always right to use the Multi-arch field to decide if the build or host version (or both) of a package should be installed. By marking the exceptions to this rule in a packages’ build-dependencies we minimise the package metadata changes needed (most packages will need no changes to their Build-Depends for this reason).
Exceptions to the normal case are specified using the build-dependency qualifiers “:native” and “:any”. “:native” is appended to a build-dep to signify that it should be installed for the build (i.e. ‘native’) architecture rather than the host architecture. It can be used on Multi-Arch: same, allowed or None packages. The “:any” qualifier signifies that a Multi-Arch: allowed build-dep should be treated as ‘foreign’, i.e. it allows the dependency to be met by any package with an architecture that can be executed on the builder, regardless of whether that is the same architecture as the current DEB_BUILD_ARCH. In order to maintain backwards-compatibility, the table of which arch a dependency resolves to is somewhat unintuitive.
Build Dependencies are resolved according to this table:
This is about how to set up a cross compiler for the Raspberry Pi and use it for building target executables from C source code. Additionally, we are going to talk about how to set up userspace emulation – enabling you to execute target binaries on your host system transparently.
We will also look a bit into the details of the cross-toolchain provided on Github and what is contained with it, just out of curiosity.
Let’s jump right into it!
1 Get the Raspberry Pi Toolchain
In order to be able to compile C code for the target system (that is, the Raspberry Pi), we will need two things:
- A cross compiler and its associated tools (cross-toolchain)
- Standard libraries, pre-compiled for the target
You can get both of them from the Raspberry Pi Tools repo on Github. Let’s create a working directory first:
Clone the tools repo into your working directory:
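The two steps might look like this (the working-directory name is arbitrary):

```shell
mkdir -p ~/raspi && cd ~/raspi
git clone https://github.com/raspberrypi/tools.git
```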
You will end up with a tools directory that contains the C compiler(s) and everything:
You will also find different pre-compiled versions of the C library there; this one, for example:
If we take a closer look at the two listings above, we notice a difference in the output of the file program: While the first one, applied to the gcc executable, states that this is to be run on your x86-compatible machine, the second listing clearly reveals you cannot do this for the libc library.
All of that is an indication that we are actually dealing with a cross-toolchain. Great start!
2 Select a Toolchain to use
After downloading you will notice there’s more than one toolchain folder inside of the tools folder. Here’s a list of what you can find there:
After resolving all symlinks, it boils down to three cross-toolchains we can choose from. Here are the corresponding gcc executables:
They all can build binaries for the target architecture, that’s for sure. But what are the differences, and which one should you use for compiling your code?
According to the Linaro GCC FAQ Page, the toolchain names follow a well-known theme. I’ve prepared a table that shows the parts between the “-“ sign, each in its own column:
| CPU | SoC / variant | ABI | Host |
| --- | --- | --- | --- |
| arm | bcm2708hardfp | linux-gnueabi | for 32-bit host |
| arm | bcm2708 | linux-gnueabi | for 32-bit host |
| arm | rpi-4.9.3 | linux-gnueabihf | for 64-bit host |
In the table, bcm2708 is the name of the device family of the Raspberry Pi’s SoC (BCM2835 in case of A+). The term gnueabi, according to the Linaro FAQ, is just an arbitrary name for an Application Binary Interface (ABI) version that came after gnu. It is basically short for what they call AArch32.
Eventually, they will all produce binaries for the Raspberry Pi. The difference is how much the result is optimized to your target system. From the naming alone, I would assume the only difference is how the compiler deals with floating point numbers, i.e. whether it utilizes the Pi’s (hardware) floating-point unit or not.
But actually, I don’t think it’s that simple: There are hundreds of possible tuning parameters which were selected while building these compilers. There is a neat description on the Raspberry Pi forums: Difference between arm-linux-gnueabi and bcm2708?.
As a side note: You may have trouble using the first two compilers on your 64-bit host system and get an error message that looks like this:
error while loading shared libraries: libz.so.1: cannot open shared object file: no such file or directory.
This is a rather cryptic message; chances are, you’re just missing the appropriate x86 libraries and that’s the reason they cannot be found. You can fix this by either:
- just using the third one, which is 64-bit 🙂 or
- installing the needed library on your host system (refer to AskUbuntu: Error while loading shared library libz.so.1 while cross compiling for arm-linux)
Long story short: You can take any of the toolchains offered, depending on your host system’s architecture. For 32 bit, you can choose from two of them.
3 Build a sample C program
At this point we should be able to compile a Hello World program, to see if the toolchain works:
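A sketch of such a script, assuming the 64-bit toolchain path inside the tools checkout from earlier (adjust CC to the toolchain you selected):

```shell
#!/bin/bash
# assumed location of the cross-compiler inside the cloned tools repo
CC=~/raspi/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc

# compile a Hello World fed to the compiler via a Here Document,
# statically linked so we don't have to ship shared libraries
"$CC" -static -xc -o a.out - <<'EOF'
#include <stdio.h>

int main(void)
{
    printf("Hello from the cross-toolchain!\n");
    return 0;
}
EOF

# show some details about the resulting executable
file a.out
```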
To give it a try, you can copy the entire snippet into a shell script and execute it. In case you are wondering what the EOF markers are about: The C code is embedded using a technique called the Here Document. Actually, that’s just an unimportant detail that saves us from having to create another c file. So, don’t worry too much about it if it seems confusing.
Anyways, the script will generate an executable called a.out and show some details about it:
Note that the resulting executable has been linked statically against the standard library. This makes it easier for us to deal with dependencies in this example. The principle is the same for dynamic linking, it’s just a bit more complex to set up (because you have to deal with selecting the loader and such).
4 Set up Qemu User Mode Emulation
You can see from the output above that we are dealing with an ARM executable. That also means we cannot execute the program on our host machine. In order to fix that, let’s set up a way to emulate the ARM on our host machine.
That approach is really interesting because it makes the emulation process transparent to you. Upon installation, Qemu registers a set of binary formats ( binfmt ) with the Linux kernel.
Let’s install the package(s) needed for Qemu:
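On Debian/Ubuntu, the static variant of Qemu registers the binfmt handlers on installation:

```shell
sudo apt install qemu-user-static
```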
You can verify the installation by inspecting the output of update-binfmts --display :
There is a previous article on this blog about some experiments with Linux bin formats you may find interesting in this context: Deardevices: Linux Shebang Insights.
Executing the target binary should now be as easy as:
5 Transfer Binary to the Pi and execute it there
Executing on the target should work just as well. Let’s copy the binary and execute it via ssh:
That’s it! We are done.
In this article, we have looked a bit into the Raspberry Pi tools repository and selected a cross-compiler we then used to compile an executable for the target system. We have set up QEMU User Emulation so we could run the compilation result on our host Linux.
Where to go from here?
To keep it simple for now, we haven’t messed with dynamic linking yet. That’s indeed a limitation for practical use so we may talk about it in another article on this blog.
Last Updated on February 9th, 2020 by App Shah
Apache Maven is a software project management tool. Based on the concept of a Project Object Model (POM), Maven can manage a project’s build, reporting and documentation from a central piece of information.
On Crunchify, we have more than 20 different Maven tutorials, including Setting up Maven Classpath on Windows and MacOS, maven-war-plugin, maven-shade-plugin, maven-assembly-plugin, etc.
In this tutorial we will go over some widely used tips and tricks that fix most Maven and POM dependency related issues in the Eclipse IDE.
These tips will help if you have any of the following questions:
- How do I fix Maven, Maven configuration and Maven dependency issues in Java?
- How do I update my Maven project to work in Eclipse?
- Maven dependency problem in Eclipse
- Maven dependency problem in Eclipse: missing artifact
- How to fix the “Updating Maven Project” error?
- Common Maven problems and solutions
Let’s get started:
Task-1 : Perform “Project Clean” in Eclipse IDE
- Create a new Maven-based project or open an existing Maven project.
- In my case, I’m opening my existing Simplest Spring MVC Hello World project in Eclipse.
- Click on the Project menu in Eclipse.
- Choose the Clean… option from the list.
- Select the project you want to clean, or Select All.
- In my case it’s just CrunchifySpringMVCTutorial.
Task-2 : Perform Maven Update Project in Eclipse IDE
- Right click on Project
- Click on Maven
- Click on Update Project.
Task-3 : Perform Maven clean install in Eclipse IDE
- Right click on project
- Click on Run As
- Click on Maven build.
- Provide the goals: clean install (Eclipse supplies the mvn part itself)
- Select checkbox for Skip Tests
- Click Apply and Run
You should see BUILD SUCCESS message after successful run.
Performing the above steps should resolve most common Maven build issues in Eclipse. Let me know if you face any more issues and I will try to help debug.
Nothing worked and you are still getting a weird Maven issue? Try deleting the .m2/repository folder via your file explorer.
-> After that, perform the above steps and all Maven libraries will be downloaded again, fresh.
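From a terminal, the same last-resort cache wipe looks like this (assumption: the default ~/.m2 location; Maven simply re-downloads whatever the next build needs):

```shell
# Remove the local Maven artifact cache. Safe to delete: Maven
# re-fetches every required artifact on the next build.
rm -rf "$HOME/.m2/repository"
# Then, from the project root, trigger a fresh resolve, e.g.:
#   mvn clean install -DskipTests
```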
If you are using the APT package manager to install packages on Ubuntu, Debian, Linux Mint, Elementary OS, MX Linux, or other similar Linux distributions, you can ignore or exclude dependencies that you don’t want on your system.
For example, I was recently writing an article on installing the Lighttpd web server on Ubuntu 20.04, where I had to skip one package while installing PHP and its extensions. By default, installing those packages would also pull in the Apache2 web server, which I didn’t need because I already had Lighttpd; hence I wanted the APT package manager to hold back that single Apache2 package while installing the others.
Here is the example:
sudo apt-get install php php-cgi php-cli php-fpm php-curl php-gd php-mysql php-mbstring zip unzip
In the above output, you can see that the text in red is apache2, which is going to be installed automatically even though I don’t need it. To exclude it, I will use a simple marker: a hyphen (call it a dash or minus sign if you like). Append this - to the name of every package you want to skip when issuing the command.
For example :
In the following command, I want to ignore or exclude the Apache2 package as a dependency.
sudo apt-get install php php-cgi php-cli php-fpm php-curl php-gd php-mysql php-mbstring zip unzip
So I simply append a hyphen (-) to the name of the package, and the above command becomes:
sudo apt-get install php php-cgi php-cli php-fpm php-curl php-gd php-mysql php-mbstring zip unzip apache2-
If you want to exclude all packages related to the one in question, add an asterisk as well. Say I want to ignore all packages related to apache2: I append apache2*- to the command, combining the * wildcard with the - sign.
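The resulting command would presumably look like this (a sketch; note the quotes, which stop the shell from expanding the * itself before apt sees it):

```shell
sudo apt-get install php php-cgi php-cli php-fpm php-curl php-gd php-mysql php-mbstring zip unzip 'apache2*-'
```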
And this time the output for the same command will be like this:
You can see that this time the Apache2 package is not in the list of new packages that are going to be installed.
In short, to ignore a dependency while installing a package with the APT package manager, just append a minus sign (-) to the name of the dependency you want to exclude.
on 8 February 2022
One of the core concepts of snaps is cross-distro compatibility. Developers can build their snaps once, and they should run well on more than 40 different Linux distros. But how does one take care of all the required runtime dependencies? By providing them inside the snap, as part of the bundle.
In the snap ecosystem, the functionality is satisfied through stage packages, a list of libraries and other runtime components declared for every application included inside the snap. What makes things rather interesting is how this list is created. In this blog post, we want to take you through the journey of dependency mapping, and how Snapcraft, the command-line tool used to build snaps, can assist you in the process.
Snaps are self-contained, isolated archives, designed to run independently of the underlying system. The abstraction is provided by the snapd service, which enables snaps to start and run. From inside the snap, applications do not see the host’s filesystem; instead, they see a layer called base, a set of libraries that provides a minimal functional environment for these applications.
A base is aligned to one of Ubuntu’s LTS releases. For instance, core18 is aligned to Ubuntu 18.04 LTS. This means that a snap built with base: core18 will “think” it runs on top of Ubuntu 18.04, even though the external operating system may be Ubuntu 20.04 or perhaps Fedora 34.
In addition to the base, developers may need to declare additional runtime dependencies, which are not provided in this minimal layer. These will be the stage packages, which need to be declared for the applications. The big question is, how does one know what to add?
If you are well familiar with the application you’re snapping, then you most likely know what needs to be included. Even so, you may not necessarily be aware of the intricate dependencies, especially if you compile or include additional toolkits into your program.
In the classic Linux world, an easy way to detect (most) runtime dependencies is to run the ldd tool against a binary, which will then provide a list of the libraries that the binary needs. Usually, these will be located under /usr or /lib on the host system.
As a developer, you can use this output as your baseline. Now, you can reverse trace the name of the packages that provided these libraries. For instance, on Linux distributions that use deb packages and the dpkg management tool, you can use the -S option to trace the upstream packages.
This way, you can create a rudimentary list of stage packages for your applications. You will need to list the names of packages that provide the libraries, except libraries already provided by the base. Please note that you will need to use the names (and versions) that match the base you specified in your snap. As a crude example, in Ubuntu 18.04, you have libncurses5, while Ubuntu 21.04 offers libncurses6.
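As a sketch of that reverse tracing, here is the pipeline with canned ldd output so it is reproducible; on a real Debian/Ubuntu host you would start from ldd ./mybinary and feed the extracted paths to dpkg -S:

```shell
# Canned ldd output (assumption: stand-in for `ldd ./mybinary`)
cat <<'EOF' > ldd-output.txt
	linux-vdso.so.1 (0x00007ffd1a9f2000)
	libncursesw.so.6 => /lib/x86_64-linux-gnu/libncursesw.so.6 (0x00007f34c0a00000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f34c0600000)
EOF
# Keep only libraries resolved to a real path; print the path itself.
awk '$2 == "=>" && $3 ~ /^\// { print $3 }' ldd-output.txt > libs.txt
cat libs.txt
# On a Debian/Ubuntu host, map each path to its owning package with:
#   xargs -a libs.txt dpkg -S
```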
If you’re using a non-Ubuntu system for your package discovery, you will need to “translate” the names of packages that your distro reports (say Manjaro or openSUSE) and see what such packages are called in Ubuntu. Alternatively, you could build your own non-Ubuntu base, but this is far from a trivial exercise.
The use of ldd is not comprehensive, though. If your application uses the dlopen() function, you might not necessarily know what’s missing. In that case, you will need to build your snap, let it run, and then examine the runtime failure error. Typically, you will see what libraries cannot be found, and will then need to be mapped to package names, and added to the stage package list.
To help you build your snaps more effectively, Snapcraft will try to do a lot of the search and discovery for you. The exact behavior of the tool somewhat depends on which core you use, but in essence, Snapcraft will try to best guess what you need. The following method should get you going:
- Build a snap without declaring any stage packages. See if there are any build errors, or if Snapcraft has resolved the missing dependencies for you.
- Add any packages suggested by Snapcraft.
- Repeat the process.
In the build output above, the command-line output of the Snapcraft build process explicitly tells the user (developer) that it could not satisfy two dependencies. This means they most likely do not exist in the Ubuntu archives (for the chosen base), and you will need to provide them yourself. Often, this will mean using the dump plugin and providing additional libraries or deb packages from your own local sources or a different online repository.
To make your development process even more efficient, you may also want to consider using the snap try and snapcraft pack commands, allowing you to quickly make changes to your work project before you assemble it into a final snap artifact. This can be quite useful especially in figuring out any missing runtime dependencies.
Hopefully, this article demystifies some of the intricacies of Linux dependency search and discovery, and what steps developers need to take to build their snaps quickly and efficiently. Snapcraft will try to make a lot of intelligent guesses, and resolve the list of needed stage packages for you, making the overall experience more pleasant. If you have any questions or suggestions, please join our forum, and let us know.
Ubuntu Make is a command-line tool that downloads the latest version of popular developer tools and installs them alongside all of the required dependencies (asking for root access only if some of those dependencies are missing), enables multi-arch on your system if you are on a 64-bit machine, and integrates the tools with the Unity launcher. Basically: one command to get your system ready for development!
First, let’s define the core principles around the Ubuntu Make and what we are trying to achieve with this:
- Ubuntu Make will always download, test and support the latest available upstream developer stack. No version is set in stone for 5 years; we get the latest and best release that upstream delivers to all of us. We are conscious that being able to develop on a freshly updated environment is one of the core values of the developer audience, and that’s why we want to deliver that experience with Ubuntu Make.
- We know that developers want stability overall and don’t want to upgrade or spend time maintaining their machine every 6 months. We agree they shouldn’t have to; the platform should “get out of my way, I’ve got work to do.” That’s the reason why we focus heavily on the latest LTS release of Ubuntu. All tools will always be backported and supported on the latest Long Term Support release, and tests run multiple times a day on this platform. In addition, we support, of course, the latest available Ubuntu release for developers who like to live on the edge!
- We want to ensure that the supported developer environment is always functional by always downloading the latest version from upstream. The software stack can change its requirements, requiring newer or extra libraries and thus cause breakage. That’s why we are running an entire suite of functional tests multiple times a day, on both versions that you can find in distro and on the latest trunk. That way we know if:
- We broke something in trunk and need to fix it before releasing.
- The platform broke one of the developer stacks, and we can promptly fix it.
- A third-party application or website changed and broke the integration, and we can then fix this really early on.
All of those tests running will ensure the best experience we can deliver, while always fetching the latest release version from upstream. All of this on a very stable platform!
How to use it
Example: how to install Ubuntu Make and then, Android Studio.
Installing Ubuntu Make
You can install the snap package (not working at the moment on 17.10)
If you run the snap you have to run ubuntu-make.umake
If you’re running 17.10 or want to run the “traditional” package, you can install from the Ubuntu Make PPA. First, add the PPA to your system:
Then, installing Ubuntu Make:
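For reference, those two steps presumably look like the following (an assumption: the historical ubuntu-desktop PPA name is used here; check the project page for the current one):

```shell
sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make
```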
Example: How to install android-studio
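The install command itself is presumably along these lines (assumption: umake’s android category syntax; run umake --help to confirm on your version):

```shell
umake android android-studio
```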
And then, accept the installation path and the Google license. It will download and install all requirements alongside Android Studio and the latest Android SDK itself, then configure it and fit it into the system, for instance by adding a Unity launcher icon…
And that’s it! Happy Android application hacking on Ubuntu. You will find the familiar experience with the android emulator and sdk manager + auto-updater to always be on the latest.
How to contribute
Report bugs and propose enhancements
The more direct way of reporting a bug or giving any suggestions is through the upstream bug tracker.
The tool is really there to help developers, so do not hesitate to help us steer the Ubuntu Developer Tools Center in the direction that is best for you.
We have already had some good translation contributions through Launchpad! Thanks to all our translators, we have Basque, Chinese (Hong Kong), Chinese (Simplified), French, Italian and Spanish! There are only a few strings up for translation in Ubuntu Make, and it should take less than half an hour in total to add a new language. It’s a very good and useful way to contribute for people speaking languages other than English! We do look at the translations and merge them into the mainline automatically.
Contribute to the code itself
Some people have started to offer code contributions, and that’s very good and motivating news. Do not hesitate to fork us on the upstream GitHub repo. We’ll ensure we keep up to date on all code contributions and pull requests. If you have any questions, or for better coordination, open a bug to start the discussion around your awesome idea. We’ll try to be around and guide you on how to add support for any framework! You will not be alone!
Write some documentation
We have some basic documentation (this wiki page!). If you feel there are any gaps or any missing news, feel free to edit the wiki page! You can as well merge some of the documentation of the https://github.com/ubuntu/ubuntu-make/blob/master/README.md file or propose some enhancements to it!
To give an easy start to any developer who wants to hack on Ubuntu Make itself, we try to keep the README.md file readable and in sync with the current code. However, it can drift a little; if you think any part is missing or requires more explanation, feel free to propose modifications to it so future hackers have an easier start.
Spread the word!
Finally, spread the word that Ubuntu Loves Developers, and we mean it! Talk about it on social networks (tagging with #ubuntulovesdevs) or in blog posts, or just chat with your local community! We deeply care about our developer audience on the Ubuntu Desktop and Server, and we want this to be known!
ubuntu-make (last edited by user lyzardking on 2017-12-21 12:05:21)
This is a tutorial for beginners that shows how to install the tools, compile code with gcc-arm-none-eabi, and send it to an STM32 using st-flash. It also introduces the basics of automating this task by putting all the instructions into a Makefile.
A few complete code examples can be found on GitHub:
1. Installing compiler and stlink
To compile C and/or C++ source code of your firmware you will need gcc-arm-none-eabi compiler and stlink.
What is extremely useful is that there are complete and easy-to-install packages for all major platforms (https://launchpad.net/
First, we need to install the dependencies and then build it from source (https://github.com/texane/stlink/blob/master/doc/compiling.md#build-from-sources).
2. Compiling and burning the code
Now that you have the toolchain installed, the next step is to compile the source code into an .elf file, then generate a .bin file, and finally burn this binary file to the STM32 chip using an ST-Link v2 programmer.
Here is an example content of main.c file. The code does nothing except getting stuck in an endless loop but it’s always something!
The command below will compile your code. It’s GCC, so I assume it looks familiar to you and no additional explanations are needed. If you want to compile for some other MCU, you need to specify at least the appropriate -mcpu value, linker script (.ld) and startup file (.s) (not provided in this tutorial).
After performing successful compilation, you can check program and data memory size with this command.
Most programmers will not accept a GNU executable as an input file, so we need to do a little more processing. The next step is to convert the information from the .elf into a .bin file. The GNU utility that does this is called arm-none-eabi-objcopy.
The utility called st-flash can program processors using the content of the .BIN files specified on the command line. With the command below, the file main.bin will be burned into the flash memory.
Voila! Chip is programmed.
3. Make and Makefiles
Now, we can automate this process by creating a Makefile and putting our commands there. The structure of a Makefile is very simple, and more information about it can be found here. The make utility automatically reads a file named Makefile in the folder where you launch it. Take a look at the simple Makefile presented below.
If you launch a plain make in the terminal, only the target “all” will be executed. When you launch make flash, the target “flash” will be executed, and so on.
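A minimal Makefile along the lines this section describes might look like this (a sketch with assumptions: an STM32F0-class part, so the -mcpu value, the stm32.ld linker script and the startup.s file are placeholders you must adapt to your chip; recipe lines must be indented with tabs):

```makefile
# Hypothetical toolchain settings; adjust for your MCU.
CC      = arm-none-eabi-gcc
CFLAGS  = -mcpu=cortex-m0 -mthumb -nostdlib

all: main.bin

main.elf: main.c
	$(CC) $(CFLAGS) -T stm32.ld startup.s main.c -o main.elf

# Strip the ELF metadata down to a raw binary image for the flasher.
main.bin: main.elf
	arm-none-eabi-objcopy -O binary main.elf main.bin

# 0x8000000 is the usual STM32 flash base address.
flash: main.bin
	st-flash write main.bin 0x8000000

clean:
	rm -f main.elf main.bin
```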
Essentially, assuming that our program is in main.c, only those three things are needed to compile and burn the code to STM32 chip.
It’s important to highlight that we can easily automate the whole process with Makefiles. Sooner or later you will need it!
In this article, we will learn how to fix the missing dependencies and broken packages using the apt-get command. Note that, we have run the commands and procedure mentioned in this article on a Debian 10 system. The same procedure can be followed in Ubuntu and older Debian versions.
We will use the command-line Terminal for trying the solutions and fixing the problem. To open the Terminal application in Debian, hit the super key on the keyboard and search for it using the search bar that appears. When the search result appears, click on the Terminal icon to open it.
Using apt-get to fix missing and broken packages
Apt-get is a Terminal based package management tool used for installing, upgrading, and removing packages. Along with these features, it also has flags that can be used for fixing missing dependencies and broken packages.
Use the “--fix-missing” option with “apt-get update” to refresh the package lists and retry any package downloads that previously failed.
Once you are done with the update, execute the below command in order to force the package manager to find any missing dependencies or broken packages and install them.
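The two commands these steps describe are presumably the following (assumption: a Debian/Ubuntu system with root privileges):

```shell
sudo apt-get update --fix-missing
sudo apt-get install -f    # -f = --fix-broken: resolve unmet dependencies
```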
Another approach to solving the broken package issue via apt-get is to edit the “/etc/apt/sources.list” file, adding repositories with newer versions of the packages available, and then run the “apt-get update” command to refresh the repository list.
If the above method does not fix the issue of broken dependencies and broken packages and still you are receiving the error, then try the following methods.
In this method, we will use “apt-get autoremove” and “dpkg” in order to fix missing dependencies and broken packages.
1. Update the repository index by executing the below command in Terminal:
2. Next, execute the below command to clean out the local repository:
3. Execute the below command to remove all the unnecessary packages that are no longer needed:
The above command will display the unmet dependencies or broken package’s name.
4. Then try executing the below command in Terminal to force remove the broken package:
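Taken together, the four steps above presumably correspond to the following sequence (assumptions: Debian/Ubuntu with root privileges; replace the placeholder with the broken package name dpkg reported):

```shell
sudo apt-get update                    # 1. refresh the repository index
sudo apt-get clean                     # 2. clean out the local package cache
sudo apt-get autoremove                # 3. drop packages nothing depends on
# 4. force-remove the broken package (placeholder name):
sudo dpkg --remove --force-remove-reinstreq <broken-package>
```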
In the following method, we will use the “dpkg --configure” command in order to fix missing dependencies and broken packages.
Dpkg is a package management tool that can be used to install, remove and manage packages. Similar to apt-get, it can also help to fix broken packages and missing dependencies. If you receive some errors while installing or updating the packages, try the following solution with dpkg:
1. Execute the below command in the Terminal to reconfigure all the partially installed packages.
If the above command does not work, like in our case and you see similar results displaying the erroneous package, then try removing the package.
2. Execute the below command in Terminal in order to remove the erroneous package.
3. Then use the below command to clean out the local repository:
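The dpkg-based sequence described above presumably amounts to (assumption: Debian/Ubuntu with root privileges; the package name is a placeholder):

```shell
sudo dpkg --configure -a        # 1. finish configuring half-installed packages
sudo dpkg --remove <package>    # 2. remove the package reported as erroneous
sudo apt-get clean              # 3. clean out the local package cache
sudo apt-get update             #    then refresh the index and retry
```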
After trying any one of the above solutions, run the update command to ensure the dependencies are resolved and broken packages are fixed or removed.
Fixing the dependency and broken packages errors and then returning the system to the normal state may take hours. Sometimes it gets so complicated that when you finally fix it, you feel so lucky. We have presented some solutions regarding this error, so please give them a try. If you know some of the possible solutions we did not mention, please let us know in the comments.
About the author
Karim Buzdar holds a degree in telecommunication engineering and holds several sysadmin certifications. As an IT engineer and technical author, he writes for various web sites. He blogs at LinuxWays.
Ubuntu has thousands of .deb files in its official and unofficial repositories. However, not all packages are available in DEB format; sometimes packages are available only for RPM-based or Arch-based distros. In such cases, it’s useful to know how to create a .deb file from a source tarball. In this brief tutorial, let us see how to create a .deb file from a source file in Ubuntu 16.04 LTS. This guide should work on all DEB-based systems such as Debian, Linux Mint, and Elementary OS.
Create a .deb file from Source in Ubuntu
First, we need to install the required dependencies to compile and create DEB file from source file.
We have installed the required dependencies. Let us go ahead and download the source file of a package.
Downloading source tarballs
For the purpose of this tutorial, let us create .deb file for Leafpad source file. As you know already, Leafpad is the simple, graphical text editor.
Go to the Leafpad home page and download the tar file.
Then, extract the downloaded tar file as shown below.
Then, go to the extracted folder, and run the following commands one by one to compile the source code:
Note: In case ./configure command is not found, skip it and continue with next command.
Finally, run the following commands to create .deb file from source code.
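The prompts described below (type Y to create a description, press ENTER to accept details) match the checkinstall tool, so the whole flow is presumably along these lines (assumptions: build-essential and checkinstall provide the compile and packaging dependencies; keep <version> as whatever tarball you downloaded):

```shell
sudo apt-get install build-essential checkinstall   # assumed dependencies
tar xf leafpad-<version>.tar.gz                     # extract the tarball
cd leafpad-<version>/
./configure        # skip if the tarball ships no configure script
make               # compile the source
sudo checkinstall  # build the .deb, install it, and leave it in this directory
```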
Type Y when asked to create the description for the Deb file.
Next, type the description for the DEB file, and press ENTER twice to continue.
In the next screen, you will see the details of the source package that the DEB file will be created from. The DEB package will be built according to these details.
Review the details, and change them as you wish.
For example, I want to change the maintainer Email id. To do so, press number “0”. Type the maintainer email, and press ENTER key.
Finally, press ENTER if you are OK with the details.
The .deb Package has been built successfully, and installed automatically.
The .deb will be saved in the directory where you extracted the source file.
Let us view the contents of the source directory:
As you can see in the above output, the deb file has been successfully created and saved in the source directory itself.
You can also remove the installed deb package as shown below.
I have tested this guide with the Leafpad and 7zip source files. It worked like a charm, as described above.
That’s all for now. You know now how to create .deb file from its source file. I will be soon here with another interesting article. Until then, stay tuned with OSTechNix.
If you find this article useful, please share it on your social networks and support us.
Senthilkumar Palani (aka SK) is the Founder and Editor in chief of OSTechNix. He is a Linux/Unix enthusiast and FOSS supporter. He lives in Tamilnadu, India.
Last modified: October 12, 2020
Upgrading Maven dependencies manually has always been a tedious work, especially in projects with a lot of libraries releasing frequently.
In this tutorial, we’ll learn how to exploit the Versions Maven Plugin to keep our dependencies up-to-date.
Above all, this can be extremely useful when implementing Continuous Integration pipelines that automatically upgrade the dependencies, test that everything still works properly, and commit or rollback the result, whichever is appropriate.
2. Maven Version Range Syntax
Back in the Maven 2 days, developers could specify version ranges within which artifacts would be upgraded without the need for manual intervention.
This syntax is still valid, used in several projects out there and is hence worth knowing:
Nonetheless, we should avoid it in favor of the Versions Maven Plugin when possible, because advancing concrete versions from the outside gives us far more control than letting Maven handle the whole operation on its own.
2.1. Deprecated Syntax
Maven2 also provided two special metaversion values to achieve the result: LATEST and RELEASE.
LATEST looks for the newest possible version, while RELEASE aims at the latest non-SNAPSHOT version.
They’re, indeed, still absolutely valid for regular dependencies resolution.
However, this legacy upgrade method was causing unpredictability where CI needed reproducibility. Hence, they’ve been deprecated for plugin dependencies resolution.
3. Versions Maven Plugin
The Versions Maven Plugin is the de facto standard way to handle versions management nowadays.
From high-level comparisons between remote repositories up to low-level timestamp-locking for SNAPSHOT versions, its massive list of goals allows us to take care of every aspect of our projects involving dependencies.
While many of them are out of the scope of this tutorial, let’s take a closer look at the ones that will help us in the upgrade process.
3.1. The Test Case
Before starting, let’s define our test case:
- three RELEASEs with a hard-coded version
- one RELEASE with a property version, and
- one SNAPSHOT
Finally, let’s also exclude an artifact from the process when defining the plugin:
4. Displaying Available Updates
First of all, to simply know if and how we can update our project, the right tool for the job is versions:display-dependency-updates:
As we can see, the process included every RELEASE version. It even included commons-collections4 since the exclusion in the configuration refers to the update process, and not to the discovery one.
In contrast, it ignored the SNAPSHOT, for the reason that it’s a development version which is often not safe to update automatically.
5. Updating the Dependencies
When running an update for the first time, the plugin creates a backup of the pom.xml named pom.xml.versionsBackup.
While every iteration will alter the pom.xml, the backup file will preserve the original state of the project up to the moment the user will commit (through mvn versions:commit) or revert (through mvn versions:revert) the whole process.
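From the command line, that commit/revert cycle looks like this (a sketch, using one of the update goals described below as the example):

```shell
mvn versions:use-latest-releases   # rewrites pom.xml; backup kept as pom.xml.versionsBackup
mvn versions:commit                # accept the changes and delete the backup
mvn versions:revert                # ...or restore pom.xml from the backup instead
```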
5.1. Converting SNAPSHOTs into RELEASEs
It happens sometimes that a project includes a SNAPSHOT (a version which is still under heavy development).
We can use versions:use-releases to check whether the corresponding RELEASE has been published, and moreover to convert our SNAPSHOT into that RELEASE at the same time:
5.2. Updating to the Next RELEASE
We can port every non-SNAPSHOT dependency to its nearest version with versions:use-next-releases:
We can clearly see that the plugin updated commons-io, commons-lang3, and even commons-beanutils, which is not a SNAPSHOT anymore, to their next version.
Most importantly, it ignored commons-collections4, which is excluded in the plugin configuration, and commons-compress, which has a version number specified dynamically through a property.
5.3. Updating to the Latest RELEASE
Updating every non-SNAPSHOT dependency to its latest release works in the same way, simply changing the goal to versions:use-latest-releases:
6. Filtering out Unwanted Versions
In case we want to ignore certain versions, the plugin configuration can be tuned to dynamically load rules from an external file:
Most noteworthy, the rules URI can also refer to a local file:
6.1. Ignoring Versions Globally
We can configure our rules file so that it’ll ignore versions matching a specific Regular Expression:
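A rules file that globally ignores pre-release versions might look like this (a sketch: the regular expression is an assumption, so adapt it to the version naming scheme of your dependencies):

```xml
<ruleset comparisonMethod="maven"
         xmlns="http://mojo.codehaus.org/versions-maven-plugin/rule/2.0.0">
    <ignoreVersions>
        <!-- skip alpha/beta/milestone/release-candidate versions -->
        <ignoreVersion type="regex">.*[-.](alpha|beta|M|RC)[0-9]*</ignoreVersion>
    </ignoreVersions>
</ruleset>
```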
6.2. Ignoring Versions on a Per-Rule Basis
Finally, in case our needs are more specific, we can build a set of rules instead:
We’ve seen how to check and update the dependencies of a project in a safe, automatic, and Maven3-compliant way.
As always, the source code is available over on GitHub, along with a script to help showcase everything step-by-step and without complexity.
To see it in action, simply download the project and run in a terminal (or in Git Bash if using Windows):
A while ago, we published a guide about a tool called UKUU that is used to install and/or update the latest Linux kernel on DEB-based systems such as Ubuntu and Linux Mint. Today, we will look at a similar tool called “Linux Kernel Utilities”. It is a set of BASH shell scripts that can be used to compile and/or update the latest Linux kernels on Debian and its derivatives.
Linux Kernel Utilities contains the following three scripts.
- compile_linux_kernel.sh – Compile and install the latest Linux Kernel from source,
- update_ubuntu_kernel.sh – Download and install or update the precompiled Ubuntu Kernel,
- remove_old_kernels.sh – Remove all inactive/unused Linux Kernels.
In this brief guide, I will explain how to install and use Linux Kernel Utilities in Ubuntu 16.04 LTS.
Linux Kernel Utilities – Scripts To Compile And Update Latest Linux Kernel
Install Linux Kernel Utilities
We can install Linux Kernel Utilities in two ways.
The recommended way to do this is git clone the repository using command:
The above command will clone the contents of the repository in a folder called “linux-kernel-utilities” in your current working directory.
Go to that directory:
Make the scripts executable using command:
Scripts will prompt to update when necessary. To update them, just run:
Another way to install the scripts is to download the DEB package and install it manually.
Go to the Releases page and download the latest version. As of writing this guide, the latest version was 1.1.6.
Then, install it as shown below.
All scripts will be installed under /opt location. You can execute the scripts from here.
To remove it, run:
Compile Linux Kernel
As I mentioned in the introduction section, Linux Kernel Utilities consists of three scripts. compile_linux_kernel.sh script is used to download and compile the latest Kernel from http://www.kernel.org website. This script will display the list available Linux Kernels in that site, so you can pick one from the list.
Run the following command to list the available kernels. You don't need to run these scripts as the root user; you will be prompted for the root or sudo password when necessary.
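The listing step is presumably just running the script from the cloned directory; treat this invocation as a sketch and check the script's help output for the exact options:

```shell
# Interactive mode: shows the kernel list from kernel.org and prompts
# you to choose one (invocation assumed; run from the cloned directory).
./compile_linux_kernel.sh
```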
Click OK to continue.
On its first run, the script will install any missing dependencies.
Next, select a Kernel from the list to download.
Just follow the onscreen instructions to compile and install the selected Linux Kernel.
To compile and install the latest available Linux Kernel, run:
Also, you can compile and install a kernel from a local archive file.
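The two non-interactive variants described above might be invoked as follows. The flag names (`--latest`, `--archive`) and the archive path are assumptions, so confirm them against the script's own help text before relying on them:

```shell
# Compile and install the newest available kernel without prompting
# (flag name assumed).
./compile_linux_kernel.sh --latest

# Build from a previously downloaded source archive
# (flag name and path are hypothetical).
./compile_linux_kernel.sh --archive ~/Downloads/linux-4.13.tar.xz
```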
Download and install precompiled Linux Kernel
The update_ubuntu_kernel.sh script allows you to download and install or update from the list of available precompiled Linux kernels at https://kernel.ubuntu.com.
To install a precompiled kernel, run:
It will list all available precompiled Linux kernels from the kernel.ubuntu.com website. Just enter a number from the list to install the selected kernel.
After installing the new kernel, reboot and boot into the newly installed kernel.
To install the latest available Linux kernel, run:
The above command directly picks the latest kernel available from the kernel.ubuntu.com website and installs it.
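The two ways of using update_ubuntu_kernel.sh described above would then look roughly like this; the `--latest` flag is an assumption, so check the script's help output:

```shell
# Interactive: list precompiled kernels from kernel.ubuntu.com
# and choose one by number.
./update_ubuntu_kernel.sh

# Non-interactive: fetch and install the newest precompiled kernel
# (flag name assumed).
./update_ubuntu_kernel.sh --latest
```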
Remove inactive Linux Kernels
The remove_old_kernels.sh script removes inactive and unused kernels from your Ubuntu system. Please be careful when using it: it keeps only the currently loaded Linux kernel and removes all other kernels. It is highly recommended to reboot before executing this script, so that the kernel you want to keep is the one currently running.
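The core idea of the script (keep only the running kernel) can be sketched with standard tools. This is an illustration of the logic, not the script itself:

```shell
# Identify the running kernel release, e.g. "4.4.0-21-generic".
current=$(uname -r)
echo "Running kernel: $current"

# List installed kernel image packages that do NOT match the running
# kernel; these are the candidates remove_old_kernels.sh would purge.
dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/{print $2}' | grep -v "$current" || true
```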
Type ‘y’ and hit Enter to remove the old kernels. You’ll be asked to enter your sudo user password to uninstall old kernels.
Now, the old kernels have been removed from your Ubuntu system.
And, that’s all. Hope this helps. If you find this guide useful, please share it on your social, professional networks and support OSTechNix. I will be soon here with another interesting guide. Till then, stay tuned!
Welcome to the Linux Mint forums!
unmet dependencies (while installing opencv for c++)
Post by DryIce » Fri Oct 20, 2017 3:32 pm
I followed these commands while installing:
[compiler] sudo apt-get install build-essential
[required] sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
[optional] sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
[opencv-dev] sudo apt-get install libopencv-dev
After getting error I tried:
sudo apt-get install -f
Didn’t solve the issue
Then I tried:
sudo apt-get remove libopen-cv
Here is the output:
I also tried:
$ sudo apt-get clean
$ sudo apt-get autoclean
$ sudo apt-get -f install
$ sudo apt-get autoremove
one after another, then tried to install libopencv-dev again, but it didn't work.
Every time it says "The following packages have unmet dependencies", just like in the first pic.
Can someone provide me any solution ?
Thank you very much
Re: unmet dependencies (while installing opencv for c++)
Post by Mute Ant » Sun Oct 22, 2017 7:21 am
I tried your first command, sudo apt-get install libopencv-dev, on an unmodified Mint 17.3 and it installed correctly, adding 86 (!) packages to the OS.
I suspect you are trying to install a ‘foreign’ package from a ‘foreign’ repository. In this context ‘foreign’ simply means ‘newer than the OS’. A repository for Ubuntu Xenial (2016) connected to an OS based on Ubuntu Trusty (2014) would generate reports like this.
If you boot your installer DVD and run a Live Session, does libopencv-dev install correctly then? HINT: If it does, you can copy the downloaded deb files in /var/cache/apt/archives for later use in your misbehaving system.
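The copy-the-debs hint above might look like this in practice; the mount point for the removable media is hypothetical, so adjust it to wherever your USB stick actually mounts:

```shell
# In the Live Session, after the test install succeeds, save the fetched
# packages to removable media (mount point is hypothetical).
mkdir -p /media/usb/opencv-debs
cp /var/cache/apt/archives/*.deb /media/usb/opencv-debs/

# Later, on the installed (misbehaving) system:
sudo dpkg -i /media/usb/opencv-debs/*.deb
```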
The Software Sources accessory has tools to help, either to remove a PPA or to remove ‘foreign’ packages from the OS.
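A command-line counterpart to the Software Sources cleanup is ppa-purge, which disables a PPA and downgrades its packages back to the official distro versions. The PPA name below is a placeholder:

```shell
sudo apt-get install ppa-purge

# Disable the PPA and roll its packages back to the distro versions
# (PPA name is hypothetical -- substitute the offending one).
sudo ppa-purge ppa:some-user/some-ppa
```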