Open Masca - A blog for Open Source, FPGAs and stuff.<br />
<h1>
Moving from single HD to RAID1</h1>
<i>2013-02-25</i><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">Well, after thinking everything was good and working, during the week I started to see the same error messages at boot again. That is not good.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">So I decided to get rid of that HD once and for all.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The original plan was to set up an asymmetric RAID1 array with a new HD and the old one, but right now that looks like a bad idea.</span><br />
<span style="font-family: inherit;"><br /></span>
<h2>
<span style="font-family: inherit; font-size: large;">The hardware</span></h2>
<span style="font-family: inherit;">First of all, I had these two drives:</span><br />
<div>
<blockquote class="tr_bq">
<span style="font-family: inherit;">Original disk 1.5TB 32MB cache, 7200 512/512 (logical/physical)</span></blockquote>
<blockquote class="tr_bq">
<span style="font-family: inherit;">New disk 2.0TB 64MB cache, 7200 512/4096 (logical/physical)</span></blockquote>
</div>
<div>
<span style="font-family: inherit;">So I went to the store and got another HD of the same model as the new disk, so I could get an efficient, easy-to-set-up-and-manage RAID1 array without any "special cases".</span><br />
<span style="font-family: inherit; font-size: x-small;"><br /></span>
<h2>
<span style="font-family: inherit; font-size: large;">Procedure</span></h2>
<span style="font-family: inherit;">The <a href="https://wiki.archlinux.org/">Archlinux Wiki</a> has a really good article on how to <a href="https://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID">Convert a single drive system to RAID</a> and, as always, it is a good idea to follow it. Most of what I did followed that tutorial almost verbatim, so I will not reproduce it here. I will just mention some of the variations I used to set up my box.</span><br />
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">The original disk had the following partitions:</span></div>
<blockquote class="tr_bq">
<span style="font-family: inherit;">/dev/sda1 NTFS</span><br />
<span style="font-family: inherit;">/dev/sda2 /boot</span><br />
<span style="font-family: inherit;">/dev/sda3 /</span><br />
<span style="font-family: inherit;">/dev/sda4 /home</span><br />
<span style="font-family: inherit;">/dev/sda5 swap</span></blockquote>
<div>
<h3>
<span style="font-family: inherit; font-size: small;">To Windows or not to Windows?</span></h3>
</div>
<div>
<span style="font-family: inherit;">As you can see, I had an NTFS partition. There I had Windows installed, with the sole purpose of being able to play StarCraft II with all the power of my graphics card. I only boot that partition to play and to fight with the annoying Windows updates, which keep failing and keeping me away from playing when I want.</span></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">After searching a little on the web I found that StarCraft II is reported to actually run very well under Wine. I found the <a href="http://appdb.winehq.org/objectManager.php?sClass=version&iId=20882">Wine page</a> and also an article from the <a href="https://wiki.archlinux.org/index.php/StarCraft_2">ArchWiki</a>.</span></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">I have been told (and it makes a lot of sense, because it is a broken system) that Windows doesn't like to be moved from one HD to another: it will detect the different device and complain all the way, up to forcing me to do a fresh install, which would break all my partitions and provide me with a full dose of pain.</span></div>
<div>
<span style="font-family: inherit;"><br /></span></div>
<div>
<span style="font-family: inherit;">All that said, I decided not to install a Windows partition and to use that disk space for something more useful than storing Windows (like leaving it empty).</span></div>
<div>
<span style="font-family: inherit;"><br /></span>
<h3>
<span style="font-family: inherit;">Prepare the disk</span></h3>
<span style="font-family: inherit;">First of all, I needed to decide how I wanted to partition my disk. I decided to keep the partitioning scheme for my base system as it was, with /boot, / and /home split, and to add the extra 500GB from this new drive to the /home partition.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">I noticed that I didn't have a partition to install and play with other distributions or OSes (I have an eye on ArchBSD and FreeBSD as a way to get away from Lennart's rule). So I will change the NTFS partition to another ext* one.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">That said, it is time to set up the partitions...</span><br />
<span style="font-family: inherit;">But it looks like there is an alternative to the MBR partition scheme: GPT. WTF is it and, do I want it?</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">TL;DR: GPT does not have the 4-primary-partition limit that MBR has. And after asking the ArchWiki about <a href="https://wiki.archlinux.org/index.php/Partitioning#Choosing_between_GPT_and_MBR">Choosing between GPT and MBR</a>, it turns out it doesn't matter, since I will not boot Windows nor use Grub Legacy as boot loader, and I want more than 4 partitions without being bitten by legacy problems.</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">So I decided to use GPT. All in all, it was a </span>pleasant<span style="font-family: inherit;"> experience; the biggest difference was using the GPT tools, which are the same as the good old ones but with a nice 'g' in there</span>: gdisk, sgdisk, cgdisk. They are all part of the <a href="https://www.archlinux.org/packages/?name=gptfdisk">gptfdisk</a> package.<br />
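For illustration only, a layout like this can also be scripted non-interactively with sgdisk instead of the interactive gdisk/cgdisk. The device name, sizes and labels below are assumptions, not the ones from my box:

```shell
# Hypothetical layout on a new 2TB disk; adjust device, sizes and names.
sgdisk --zap-all /dev/sdb                                # wipe old MBR/GPT structures
sgdisk -n 1:0:+2M   -t 1:ef02 -c 1:"BIOS boot" /dev/sdb  # needed by GRUB2 on BIOS+GPT
sgdisk -n 2:0:+512M -t 2:fd00 -c 2:"boot"      /dev/sdb  # fd00 = Linux RAID member
sgdisk -n 3:0:+50G  -t 3:fd00 -c 3:"root"      /dev/sdb
sgdisk -n 4:0:0     -t 4:fd00 -c 4:"home"      /dev/sdb  # rest of the disk
sgdisk -p /dev/sdb                                       # print the table to verify
```

Running the same script against both new disks gives two identical partition tables, which is exactly what a RAID1 setup wants.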
<span style="font-family: inherit;"><br /></span>
<h3>
Setup the RAID1</h3>
</div>
<div>
After having the disk setup done, I needed to follow the tutorial and wait a LONG time for rsync to copy my partitions.</div>
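The array-creation and copy steps from the ArchWiki article look roughly like this. This is only a sketch under assumptions: /dev/sdb and /dev/sdc stand in for the two new disks, partition numbers and mount points are made up, so check lsblk/fdisk -l before running anything like it:

```shell
# WARNING: sketch only; these commands destroy data on the target partitions.

# Create a RAID1 array for / out of matching partitions on the two new disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

# Put a filesystem on it and copy the running system over
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/newroot
rsync -aAXH --progress / /mnt/newroot \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"}

# Record the array so the initramfs can assemble it at boot
mdadm --detail --scan >> /mnt/newroot/etc/mdadm.conf
```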
<div>
<br /></div>
<div>
Once again, I needed to execute the testdisk && fsck trick from the <a href="http://salinasv.blogspot.com/2013/02/bad-blocks-whatcha-gona-do-when-they.html">last post</a> to get one of my partitions, which keeps breaking, working again. After that, everything worked flawlessly.</div>
<div>
<br /></div>
<h3>
Setting up Grub2</h3>
<div>
I guess it is worth mentioning that Grub2 was a little tricky to install, not because of the <a href="https://wiki.archlinux.org/index.php/GRUB2#GPT_specific_instructions">GPT particularities</a>, but because Grub2 is too different from Grub (legacy), which was really straightforward to configure by editing the menu.lst file.</div>
<div>
<br /></div>
<div>
All you have to do is:</div>
<div>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/GRUB2#Install_to_GPT_BIOS_Boot_Partition">Install Grub2 to the GPT BIOS Boot Partition</a></li>
<li>Tell Grub2 to <a href="https://wiki.archlinux.org/index.php/GRUB2#RAID">load the mdraid</a> module, to be able to boot from a RAID device, by adding</li>
</ul>
<blockquote class="tr_bq">
GRUB_PRELOAD_MODULES="mdraid"</blockquote>
</div>
<div>
<ul>
<li>Tell Grub2 to load the <a href="https://wiki.archlinux.org/index.php/GUID_Partition_Table#BIOS_systems">GPT support</a> by adding the "part_gpt" module.</li>
</ul>
<blockquote class="tr_bq">
GRUB_PRELOAD_MODULES="part_gpt mdraid"</blockquote>
</div>
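With those modules preloaded, the remaining GRUB2 steps boil down to something like the sketch below. The device names are assumptions, and GRUB_PRELOAD_MODULES is assumed to live in /etc/default/grub as on Arch at the time:

```shell
# Install GRUB2 to both RAID members, so the box still boots if one disk dies
grub-install --target=i386-pc /dev/sdb
grub-install --target=i386-pc /dev/sdc

# Regenerate the config after editing GRUB_PRELOAD_MODULES
grub-mkconfig -o /boot/grub/grub.cfg
```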
<h2>
<span style="font-size: large;">Success</span></h2>
<div>
Once these little changes are done, it is all about following the tutorial and waiting a lot of time for 2TB to sync. =)</div>
<div>
<br /></div>
<div>
Today I have my system running on a RAID1 array, making my data a little more "secure" against HD failures.</div>
<div>
<br /></div>
<div>
Now I need to figure out what I can do with a 1.5TB HD that has some broken blocks and that I don't know whether it will fail again soon. Any ideas?</div>
<h1>
Bad blocks whatcha gonna do when they come for you</h1>
<i>2013-02-10</i><br />
This is a horror story about a hard drive failing and keeping my data away from me.<br />
<br />
This post got a lot bigger than I expected. TL;DR: broken HD; testdisk + fsck will save your data from broken blocks that corrupt your superblock.<br />
<br />
Imagine you went to the cinema and when you get back you want to do a nice pacman -Syu and see if the new KDE 4.10 has landed in the repos. You naturally go and power on your desktop, and what happens next is this, at boot time, just after udev starts trying to trigger events:<br />
<div>
<blockquote class="tr_bq">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqEsMIaR8-L_Lm5ii6Q9ZJ8sebXiQuclb-KqGaN6MPxFhE3D3xzbtr032HIWzip1q_C6m3c3tfo3bFtRsOfb1PfM9PcAcQNDlBVI3Sxqko5U1j3elJQXnyml7poCtxF1cU2pMgKnbQ2eck/s1600/IMG_20130207_001329.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqEsMIaR8-L_Lm5ii6Q9ZJ8sebXiQuclb-KqGaN6MPxFhE3D3xzbtr032HIWzip1q_C6m3c3tfo3bFtRsOfb1PfM9PcAcQNDlBVI3Sxqko5U1j3elJQXnyml7poCtxF1cU2pMgKnbQ2eck/s320/IMG_20130207_001329.jpg" width="240" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<i>ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0<br />ata2.00: BMDMA stat 0x24<br />ata2.00 failed command READ DMA EXT<br />....<br />ata2.00 status {DRDY ERR}</i><br /><i>ata2.00 error {UNC}</i></blockquote>
<br />
Ok, something is wrong with the hard drive. It can't be <b>that</b> bad, since I had been using the computer just a few hours before and turned it off normally.<br />
<br />
<h2>
Start debugging</h2>
Let's try to boot the good old Archlinux 2010 install CD and see what happens. It turns out that booting from the livecd I got just the same error. Mmm, that is wrong. AFAIK the live CD should not touch the hard drive until I want to mount it.<br />
<br />
Since I could not boot from the livecd, I wondered if I could see my data partition from my Windows system (I have a dual-boot system to be able to play Starcraft2; Blizzard, please free me and give us a Linux native game!). I knew it was a long shot since, if Linux can't boot, surely Windows will never boot either. Still, I was out of options, so I tried it. Surprisingly, it worked: Windows booted just normally and I was able to see my data partition with the ext3 driver. That is <b>really odd</b>. Still, after browsing my files a little I got a crash in the ext3 driver and it got closed. No problem: now I know I can get my files somehow and, more importantly, they are alive!<br />
<br />
I started thinking that it must be a software error, something wrong in the kernel. So I went back to the live CD approach; come on, I should be able to boot my live CD. I went into a loop of: boot, see if something is wrong in the BIOS, sometimes change the boot parameters following recommendations I read in the forums, try to boot the live CD, see the errors, try again. irqpoll, libata and acpi parameters, noapic, etc. Anything that avoids reading the HD and lets me boot the livecd.<br />
<br />
Fortunately, in one of the iterations I was lazy and waited too long (like 5-10 mins) before restarting the machine and BOOM! the live CD booted. I still got the kernel errors in the buffer, but I could also see the good old rc script running and initializing the system. Nice, now I know I can boot my computer.<br />
<br />
<h2>
Now it is time for <b>real</b> debugging</h2>
So I went and downloaded a newer version of the Archlinux ISO (2013-02-01), so I could have the latest version of every tool and a nicer resolution, since the old ISO predates KMS.<br />
<br />
<h4>
Let's try to figure out what all this output means.</h4>
My first approach was to search for the exception code. To my surprise, the good old "<i>exception Emask 0x0 SAct 0x0 SErr 0x0 action</i>" is a very generic error message. I found it with a lot of variations all over the internet, but none of them helped me fix my issue.<br />
<br />
I found this great page from the libata guys, <a href="https://ata.wiki.kernel.org/index.php/Libata_error_messages">Libata error messages</a>, which explains exactly what all the bits in the error message mean. So I realized that <i>DRDY</i> was a good thing: the drive was ready. But <i>ERR</i> means (yeah, you can guess) there is an error set in the registers. The error code was <i>UNC</i>, which means "<i>Uncorrectable error - often due to bad sectors on the disk</i>". We are fried now: my HD is broken and I have lost all my information. At least now I know the problem.<br />
<br />
In a forum I saw that there is a nice tool that tells you if there are broken blocks on your HD, so I tried it: badblocks /dev/sda. After ~4:30 hrs I got the answer: 16 bad blocks.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOI8AT8z7tBnGkO04bL-Pp1XicmFDDLQ5x9tkS0Xl3CY4nzoQBIB15ueWhxPFgZtWnqMMIy6YBM66XJmHbv-1OG1HtAehVZVWGmYinKbe4nRYu09tAwWZyZWYvIW0ovX33c3BbLV6MWVbt/s1600/IMG_20130209_105153.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOI8AT8z7tBnGkO04bL-Pp1XicmFDDLQ5x9tkS0Xl3CY4nzoQBIB15ueWhxPFgZtWnqMMIy6YBM66XJmHbv-1OG1HtAehVZVWGmYinKbe4nRYu09tAwWZyZWYvIW0ovX33c3BbLV6MWVbt/s320/IMG_20130209_105153.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">"Bad blocks bad blocks, whatcha gonna do, whatcha gonna do when they come for you"</span></td></tr>
</tbody></table>
Then I tried to run the smartctl tests, to see if they could fix the issue or give me more information, and it would take just... like 5-6 hrs to complete the test. What a pain. Not having anything else in mind, I ran the test <i>smartctl -t long</i> and went to watch a series for a while. I set up a nice watch command to show me the output of <i>smartctl -l selftest</i> and see the progress.<br />
<br />
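For the record, the pair of commands described above looks roughly like this (the device name is an assumption):

```shell
smartctl -t long /dev/sda                      # start the long offline self-test
watch -n 60 'smartctl -l selftest /dev/sda'    # refresh the self-test log every minute
```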
From the output of smartctl I figured out that it was showing which block was the last broken block found, and after comparing it with my <i>fdisk -l</i> output I noticed that it was the /dev/sda3 offset + 2. Holy! This must mean something.<br />
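The comparison is just sector arithmetic: fdisk -l reports each partition's start LBA, and the smartctl self-test log reports the LBA of the first failing sector. A sketch with made-up numbers:

```shell
# Numbers below are hypothetical -- take them from `fdisk -l` and from
# `smartctl -l selftest` on the real machine.
part_start=976771072     # first sector of /dev/sda3 according to fdisk -l
bad_lba=976771074        # LBA_of_first_error from the smartctl self-test log

offset=$((bad_lba - part_start))
echo "bad sector is ${offset} sectors into the partition"
```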
<br />
Let me recap a little here; I had my disk partitioned this way:<br />
<blockquote class="tr_bq">
<i>/dev/sda1 NTFS<br />/dev/sda2 /boot<br />/dev/sda3 /<br />/dev/sda4 /home</i></blockquote>
I was able to boot Windows from the NTFS partition, nice.<br />
I was able to mount <i>/dev/sda2</i> and see its content, nice.<br />
I was not able to mount <i>/dev/sda3</i> nor <i>/dev/sda4</i>, and the broken block was right at the beginning of sda3.<br />
Things started to show a trend.<br />
<br />
I tried debugfs to read the block and let the HD firmware blacklist it, but it kept failing with a weird error:<br />
<blockquote class="tr_bq">
<i>debugfs: open /dev/sda3<br />/dev/sda3: Bad magic number in super-block while opening filesystem</i></blockquote>
What does that mean? Well, I had no idea. The forums said I might want to try fsck. Let's do it; it can't be that bad. And the good old fsck failed with another weird error:<br />
<blockquote class="tr_bq">
fsck.ext3: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda3<br />Could this be a zero-length partition?</blockquote>
<br />
This must be wrong; I have just run <i>fdisk -l</i> on it, so I know it is a complete partition.<br />
<br />
In a <a href="https://www.linuxquestions.org/questions/linux-hardware-18/attempt-to-read-block-from-filesystem-resulted-in-short-read-while-trying-to-811348/">forum post</a>, a hopeless guy mentioned a tool named "testdisk", so I ran to man to look at what that tool is. The description says "Scan and repair disk partitions"; that sounds useful. After the short man page I decided to try it out: <i>testdisk /dev/sda3</i> and magic! It was able to tell me what was going wrong with my partition: it figured out the copies of the broken superblock and it even told me the exact command I needed to fix my problem:<br />
<blockquote class="tr_bq">
<i>fsck.ext3 -b <blockaddress> -B <blocksize></i>.</blockquote>
<br />
I ran that nice and beautiful command and suddenly I got my data back!<br />
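In case testdisk is not at hand: ext2/3 keeps backup superblocks at fixed locations, and mke2fs with the -n flag (dry run, it writes nothing) will list where they are. A hedged sketch, with a hypothetical device and a backup block address that only applies if it matches what mke2fs reports for your filesystem's block size:

```shell
# -n = dry run: prints what mke2fs WOULD do, including the backup superblock
# locations, without touching the filesystem. For accurate locations, pass
# the same block size the fs was created with (e.g. -b 4096).
mke2fs -n /dev/sda3

# Then point fsck at one of the listed backups, e.g. 32768 on a 4K-block fs:
fsck.ext3 -b 32768 -B 4096 /dev/sda3
```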
<br />
<h2>
Conclusion</h2>
What have I learnt here? First of all:<br />
<br />
<ul>
<li><b>YOU MUST HAVE BACKUPS.</b></li>
<li>Testdisk is your friend.</li>
<li>fsck is your friend.</li>
<li>Archlinux ISO is your friend.</li>
<li>*nix is your friend.</li>
<li>Hard drives tend to break.</li>
</ul>
<br />
Also, it is important to notice the wide variety of tools we have to help us get our data back. A small physical failure is not always the end of the world; you can recover from it if you have the patience to read a ton of forums and man pages and to dedicate some time to the adventure.<br />
<br />
I was happy to learn that ext3 has this redundant structure (copies of the superblock all over the fs) to help us recover from breakage. I love it. I don't know how other filesystems do the trick, but I am really happy I am using ext3.<br />
<br />
Finally, I would like to thank the Archlinux team for giving me a really powerful and nice livecd to help me through this painful trip.<br />
<br />
Now that I have my system back, I can listen to my music and use my files. It is time to set up a redundancy plan to avoid panicking again due to a bad hard drive. But that will be the project for next week.<br />
<br /></div>
<h1>
Libpurple in GSoC 2012</h1>
<i>2012-03-27</i><br />
Libpurple was accepted into the <a href="http://www.google-melange.com/">Google Summer of Code</a> this year, 2012.<br />
<br />
I urge every student reading this to apply to any of the accepted projects and, if you like, to apply to Libpurple.<br />
<br />
We have a set of <a href="http://developer.pidgin.im/wiki/FutureSOCProjects">proposed ideas</a>, but you are encouraged to bring your own ideas, since they will be fresher and will not compete with other people's over the same project.<br />
<br />
You can find libpurple's application page at <a href="http://www.google-melange.com/gsoc/org/google/gsoc2012/pidgin">Pidgin, Finch and libpurple</a>.<br />
<h1>
Simulating mixed language HDL using VCS</h1>
<i>2011-05-11</i><br />
I needed to port some ModelSim do files to this new simulator, and I found that the available documentation is not as friendly as I would like. I finally got the simulation working, and I want to archive it somewhere it can help me, or someone else, in the future.<br />
<br />
This little tutorial is supposed to be updated dynamically whenever I feel that more info is needed or I find errors in it.<br />
<br />
VCS is a simulator from Synopsys which is known to be far superior to Xilinx ISim. It supports multiple languages, such as the most popular ones: Verilog, VHDL and SystemVerilog.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">General workflow</span><br />
The general workflow when simulating with VCS consists of the following steps:<br />
<ul><li>Compile/Analyze</li>
<li>Elaborate/Build</li>
<li>Simulate</li>
</ul>First, you need to compile each and every HDL file in your design, including the testbench. This is done with different command-line tools:<br />
<ul><li><b>vhdlan</b>: The compiler for VHDL files</li>
<li><b>vlogan</b>: The compiler for Verilog and SystemVerilog files.</li>
</ul>Both commands accept the flag <b>-f filelist</b>, where "filelist" is a list of files to be compiled. This helps a lot to simplify and structure the compilation scripts.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">VHDL Compilation/Analysis</span><br />
<br />
VHDL uses libraries to organize code; getting vhdlan to compile them is not straightforward, since vcs needs to map them to some directory and then link them.<br />
<br />
To achieve this you must create a directory with the name of each library in your pwd, to be able to map the libraries to physical directories. To tell vcs how to map each library to its directory, a special file is needed: <b>.synopsys_vss.setup</b>. This file can be in your VCS installation path, in your $HOME or in your pwd; vhdlan will look for the file in that particular order.<br />
<br />
The syntax of this file is fairly easy: you first map the WORK library to a name, which must then be mapped to a physical directory; after that, each library must be mapped to a physical directory, one per line.<br />
<br />
In the following example there are two libraries: MY_LIB, with some modules of my own, and UTIL_LIB, which has utility modules designed over time.<br />
<blockquote><span class="Apple-style-span" style="background-color: white; color: #0b5394;">WORK > DEFAULT<br />
DEFAULT : ./work<br />
MY_LIB : ./MY_LIB<br />
UTIL_LIB : ./UTIL_LIB</span></blockquote>This is a simple command line used to compile VHDL files with libraries<br />
<blockquote><span class="Apple-style-span" style="color: #0b5394;">vhdlan -work <library_dir> -f <filename_of_file_list></span></blockquote><br />
<span class="Apple-style-span" style="font-size: large;">Verilog Compilation/Analysis</span><br />
<br />
Verilog doesn't use libraries, so there is no need for library tricks. Still, it's useful to know some tricks about this compiler.<br />
<br />
vlogan has some useful flags that help structure the code and keep the simulation environment isolated from the development one.<br />
<br />
<ul><li><b>+incdir+</b>: Specify the search path where vlogan will look for `include files.</li>
<li><b>+define+</b>: Define a text macro at compile time.</li>
<li><b>+v2k</b>: Enables the use of Verilog Standard 2001</li>
<li><b>-svlog</b> or <b>-sverilog</b>: Enables the analysis of SystemVerilog code.</li>
</ul><br />
This is a simple command line used to compile Verilog files using the 2001 standard, plus a SystemVerilog test bench:<br />
<blockquote><span class="Apple-style-span" style="color: #0b5394;">vlogan +v2k +incdir+<path_to_files> -f <filename_of_file_list><br />
vlogan +v2k -sverilog +incdir+<path_to_tb> -f <filename_of_file_list></span></blockquote>vlogan writes its output to a directory named AN.DB, which can be deleted in a cleanup process to keep the workspace clean.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Elaboration/Build</span><br />
<br />
Once every file needed in the design is compiled, it is time to elaborate the executable binary. The command to elaborate is <b>vcs</b>, which takes as parameter the top module to be simulated, usually the top module of the testbench.<br />
<br />
The command to elaborate is:<br />
<blockquote><span class="Apple-style-span" style="color: #0b5394;">vcs -debug_all <top_tb_module_name> glbl</span></blockquote>where the flag <b>-debug_all</b> tells the tool to enable the simulation GUI and the debug information necessary to add breakpoints and do line stepping. The <b>glbl</b> argument is needed to use Xilinx components.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Simulation</span><br />
<br />
The elaboration command generates an executable file named <b>simv</b>, which must be executed to start the simulation. The default behavior of this executable is to run and print messages from the test bench to stdout. Normally what you need is a GUI where you can see the waves and analyze the signal values at each point in time; this is done with the <b>-gui</b> parameter.<br />
<br />
The command to execute the simulation with a GUI is:<br />
<blockquote><span class="Apple-style-span" style="color: #0b5394;">./simv -gui</span></blockquote><span class="Apple-style-span" style="font-size: large;">Conclusion</span><br />
<br />
This is the basic workflow needed to simulate a design in VCS; each of the tools has a lot more parameters that can be used to get specialized behavior when needed. All of them are covered in the documentation of the tools, through the manuals or the -h parameter.<br />
<br />
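Putting the three stages together, a whole mixed-language run can be scripted in one file. The file-list names, include paths and top-module name below are placeholders, not from a real project:

```shell
#!/bin/sh
# Sketch of a full VCS mixed-language flow; adjust lists and top module.

# 1) Compile/Analyze
vhdlan -work MY_LIB -f vhdl_files.lst               # VHDL sources into MY_LIB
vlogan +v2k +incdir+rtl/ -f verilog_files.lst       # Verilog-2001 sources
vlogan +v2k -sverilog +incdir+tb/ -f tb_files.lst   # SystemVerilog testbench

# 2) Elaborate/Build
vcs -debug_all top_tb glbl

# 3) Simulate
./simv -gui
```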
[1] <a href="http://www.vlsiip.com/vcs/">VCS and coverage by Aviral Mittal</a><br />
<i>Update: This is the original link to the article I found useful</i><br />
<i>Update: Fix some escaped out <info> comments.</i><br />
<h1>
Split large LaTeX files</h1>
<i>2011-03-07</i><br />
When your paper/report is getting too large, it becomes a little complicated and frustrating to maintain it in one big file.<br />
<br />
It's possible to split a big .tex file and set up a hierarchical file tree with a small portion of the text in each file. This is achieved using the \include directives.<br />
<br />
There are three main LaTeX commands that manage multiple input files.<br />
<br />
<ul><li>\includeonly specifies a list of files that may be included by the \include command. If this command exists and a file given to \include is not listed here, it will not be included.</li>
<li>\include, as its name says, includes a file on a new page. Used with \includeonly, it can include files selectively. Note: this command can't be nested.</li>
<ul><li>It's equivalent to \clearpage \input{file} \clearpage</li>
</ul><li>\input: this is the simplest include scheme, and it is equivalent to a plain C #include.</li>
</ul><div>So your big file </div><blockquote>\section{foo}<br />
% lot of text, figures and equations<br />
\section{bar}<br />
% lot of text and subsections</blockquote>can be simplified to<br />
<blockquote>\include{foo}<br />
\include{bar}</blockquote>where foo.tex and bar.tex are sub-files containing the section text.<br />
<br />
If you want to get another layer of splitting, it's possible to just use \input in the sub-files.<br />
<br />
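As a worked example, a master file using these mechanisms might look like this (the file names are of course placeholders):

```latex
\documentclass{article}

% Comment names out of \includeonly to build only part of the document;
% page numbering and cross-references of the skipped parts are preserved.
\includeonly{foo,bar}

\begin{document}
\include{foo}   % foo.tex: \section{foo} plus its text
\include{bar}   % bar.tex: \section{bar}, which may itself \input sub-files
\end{document}
```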
[1] <a href="http://www.kfunigraz.ac.at/~binder/texhelp/ltx-165.html">http://www.kfunigraz.ac.at/~binder/texhelp/ltx-165.html</a><br />
<h1>
MSNP16 and SLP-rewrite merged</h1>
<i>2010-11-01</i><br />
I have just pushed the revision that merges my MSNP16 and SLP branches into the main development branch of pidgin. I'm very happy to have these branches merged, since they represent almost all the code I have written in the last year.<br />
<br />
Yes, I started coding MSNP16 support almost a year ago, and it took a lot of effort, reverse engineering, debugging of Wireshark dumps and a lot of pidgin debug logs to get it working. That is a lot of time!<br />
<br />
It is true that the MSNP16 code was almost complete when I started my SoC work, but I thought it would be better to start the SLP rewrite on top of the MSNP16 branch, to be able to easily test both codebases at the same time and try to get it into better shape before merging it to i.p.p.<br />
<br />
I know I announced this merge like two weeks ago but, you know, I wanted this merge to be followed by a reasonable "beta" testing period before being released, and at that time it happened that we had a security issue and needed to release 2.7.4. Once it was out, there were some ICQ issues that needed a quick release to fix those bugs, so we got 2.7.5. Now I was able to merge and get a normal release cycle, so beta testers can find bugs in this new and nice code.<br />
<br />
I hope this code fixes more issues than it brings up, especially the ones related to data transfer. Since most of the code in this area has changed due to DirectConn and the SLP rewrite, I guess it would be a good idea to review and close most of the related tickets, since their tracebacks and debug output would be really useless now. Yay for smashing tickets!<br />
<br />
I hope you all like 2.7.6 when it gets released!<br />
<h1>
Use ImageMagick to convert a set of images</h1>
<i>2010-10-25</i><br />
While reporting my experiment output from LTSpice, I need to save the plots I get as wmf (because it's the only image format supported by this software) and then change the format to png, to be easily loaded in my latex file.<br />
<br />
To achieve this goal I have used ImageMagick. ImageMagick is a really powerful set of tools to manipulate images from the command line, which allows me to type an easy command to get my png ready to be used in my tex file.<br />
<br />
I have been using the convert command:<br />
<blockquote>convert foo.wmf foo.png</blockquote><br />
Today I needed to convert a bunch of images, so using convert on each one would be a little painful. A quick google showed me the answer to my problem: mogrify.<br />
<br />
With the mogrify command you can change the format of every image you pass it in your shell. So, to convert all the wmf images in a directory, I now just need to execute:<br />
<br />
<blockquote>mogrify -format png *.wmf</blockquote><br />
And that's it. I hope some of you find it useful.<br />
<h1>
Merge Plan for MSNP16 and SLP</h1>
<i>2010-09-21</i><br />
As you know, I have been working on a refactor of the SLP code in the msn-prpl of libpurple as part of my Summer of Code. Before I started this project, I had been working on adding support for "Multiple Points Of Presence", which is part of MSN Protocol 16 (MSNP16).<br />
<br />
Since by the time I started the SoC the msnp16 code was almost complete, I started to refactor the SLP module on top of the msnp16 code. At some point in the refactoring I got an ugly crash from one of the features of MSNP16, P2P version 2, which uses a different binary header for SLP transfers than the one used before. This bug was caused by some clients not paying attention to the capabilities we expose; we clearly say we don't support P2P v2. To avoid this crash I disabled MSNP16 in the SLP branch.<br />
<br />
I have been testing the SLP code for a while with the MSNP16 feature enabled, and it looks stable enough to be merged. The crash is gone and, although there are some minor changes that must be done, especially UI stuff, the new SLP stack has no known bugs.<br />
<br />
I have updated the SLP branch with the latest changes from the MSNP16 branch, so this branch [1] has all the work waiting to be merged into Pidgin's main development branch. This was the first step to getting it merge-ready. There has been some testing from some of our closest geeky friends and now I think it's ready.<br />
<br />
My plan is to merge the SLP code, with the MSNP16 feature enabled, into im.pidgin.pidgin next week. I ask everyone interested in these features to test it before it gets to the main branch so we can have a smoother merging process.<br />
<br />
[1] im.pidgin.soc.2010.msn-tlcAnonymoushttp://www.blogger.com/profile/15102034190185236231noreply@blogger.com4tag:blogger.com,1999:blog-8168579566853597225.post-15185696013793416792010-09-20T03:24:00.000-05:002010-09-20T03:24:53.259-05:00New SLP stack working<i>Note: This post was written 2 months ago and never published, my bad.</i><br />
<br />
After some bug squashing, I finally got P2P transfers working. I have tested sending and receiving custom emoticons and display pictures, both smaller than the SB limit and bigger than it. They just work.<br />
<br />
I guess I need to test some file transfers before considering my work done, and I also have some cleanup that I want to do, but I can declare that the new stack is working nicely and is very close to being merge-ready. I'm really glad to say so.Anonymoushttp://www.blogger.com/profile/15102034190185236231noreply@blogger.com1tag:blogger.com,1999:blog-8168579566853597225.post-147461672003203832010-08-04T12:33:00.000-05:002010-08-04T12:33:57.948-05:00Implementing new architecture in msn-prpl<b>Split SLP modules:</b><br />
<div><b><span class="Apple-style-span" style="font-weight: normal;">The first thing needed was to move SLP code to where it belongs. </span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">Every high-level call, like requesting an MSN transfer or a user display, went to the upper layer of the stack (slp.[ch]). This was easy because most of the SLP code was already there, so the file just needed a cleanup.</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">Every bit of SLP protocol decoding and management, like sending ACKs or a 200 OK after the invite, went to the SlpCall module. Most of that code was in slp.c and some of it in slplink.</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">Every interaction with the link layers, like splitting a SlpMessage into Parts, went into the SlpLink module.</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">Most of these were cut-and-paste changes; still, a lot of functions needed to be exported as public so the other modules could use them. It was not too painful.</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><b>Get MsnMessage out of here!</b></span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">Since the code was designed to transfer data through the Switchboard, all of it was populated with MsnMessages. MsnMessage is a representation of the SB MSG command that is used to send the SlpMessages to the SB server. Since I wanted to make this stack agnostic of the link layer, the MsnMessage had to go. It was hard, since this object was used as the core of some of the functionality; in most cases it was replaced by a SlpMessage.</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;"><br />
</span></b></div><div><b><span class="Apple-style-span" style="font-weight: normal;">This change was really big because it implied taking every bit of SB code out of the SLP stack and abstracting it in a way that can be used by any of the link layers.</span></b></div>Anonymoushttp://www.blogger.com/profile/15102034190185236231noreply@blogger.com0tag:blogger.com,1999:blog-8168579566853597225.post-78842601032344833942010-08-04T00:00:00.000-05:002010-08-04T00:00:25.226-05:00Msn-prpl Slp RefactorI have been chosen to refactor the SLP module of the MSN-prpl as my Google Summer of Code work. This module has not been touched in ages; if you read it closely, it's clear that it's a hack over a hack, with some primitive old structure mostly lost under the hacks that were added over the years.<br />
<br />
As the first part of my work I have re-designed the SLP stack as shown in the <a href="http://developer.pidgin.im/wiki/SlpArchitecture">SlpArchitecture</a> page; this stack is mostly based on the original structure. I will not elaborate a lot here about the modules, because that documentation belongs on the wiki page. ;-) Still, I want to explain a little about the major changes I made to the original stack.<br />
<br />
MSN has different ways to do P2P, and libpurple now supports both: Switchboard and DirectConnection. The first one is the one that has always been supported, where we send the binary data to the Switchboard server and it relays it to the buddy's client. The recently added DirectConnection support opens a socket on both clients so they can communicate "directly", which makes the transfer really fast.<br />
<br />
To be able to use the same SLP code to manage both types of P2P, I have added a Conn layer which aims to abstract the Switchboard (SB) or DirectConn (DC) link. They both manage just SlpMessageParts and send them out.<br />
<br />
Both SB and DC have a data size restriction: when the binary data that needs to be sent is bigger than the link limit, it must be split into different messages. This is the reason for the creation of SlpMessageParts, the fragmented representation of a SlpMessage that can be sent to the link layer down the stack. The upper modules can manage a SlpMessage as a single message without noticing whether it's big or small; that is, the need to split messages is abstracted away from the upper layers of the stack.<br />
<br />
This is basically why I have made these changes in the architecture; I hope the new architecture will be extensible and easy to maintain.Anonymoushttp://www.blogger.com/profile/15102034190185236231noreply@blogger.com0tag:blogger.com,1999:blog-8168579566853597225.post-37228302981809758542010-06-07T03:44:00.000-05:002010-06-07T03:44:50.435-05:00Stream Divx/xvid from linux to your xbox 360For everyone of us who uses ushare to stream video from a linux box to the xbox 360, it is common to see an annoying message telling us that the file could not be played because of a bad codec.<br />
<br />
I have lived with this for a while, but today I was not going to let MS interfere with my just-downloaded torrent. So I researched a little and found a solution that actually works. It is a bit hacky, because you need to change your MIME info and maybe it will break something.<br />
<br />
The problem is related to MIME info: it looks like ushare doesn't know what to do with "video/x-msvideo", which is associated with AVI files in the /usr/share/mime/packages/freedesktop.org.xml mime file. Having noticed this, and knowing that "video/x-ms-wmv" is supported by ushare, it is easy to change the unsupported msvideo type to ms-wmv in the mime file. After editing any file in /usr/share/mime/packages/ you will need to run `update-mime-database /usr/share/mime/` for your changes to take effect.<br />
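The whole edit can also be scripted. This is just a sketch of the manual change described above; keep the backup around in case something else depends on that entry:

```shell
#!/bin/sh
# Back up the mime definition, swap the unsupported type for the one
# ushare handles, then rebuild the mime database.
sudo cp /usr/share/mime/packages/freedesktop.org.xml \
        /usr/share/mime/packages/freedesktop.org.xml.bak
sudo sed -i 's|video/x-msvideo|video/x-ms-wmv|g' \
        /usr/share/mime/packages/freedesktop.org.xml
sudo update-mime-database /usr/share/mime/
```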
<br />
Having done this, ushare sends the AVI file just like any other and, since an old update, the xbox is able to decode it and play it for you.<br />
<br />
Problem: I do not feel OK hacking the freedesktop.org.xml mime file, and I'm not sure if there is another way to tell the MIME system that .avi is a video/x-ms-wmv, or maybe a video/x-divx.<br />
<br />
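One idea I have not tried: the shared-mime-info spec allows dropping a separate package file into /usr/share/mime/packages/ whose glob, given a higher weight, overrides the default mapping, so freedesktop.org.xml stays untouched and survives package updates. The file name and weight below are my own guesses:

```shell
#!/bin/sh
# Untested sketch: map *.avi to video/x-ms-wmv via a custom package
# file instead of editing freedesktop.org.xml in place.
sudo tee /usr/share/mime/packages/ushare-avi.xml >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="video/x-ms-wmv">
    <glob pattern="*.avi" weight="90"/>
  </mime-type>
</mime-info>
EOF
sudo update-mime-database /usr/share/mime/
```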
I think I will need to look for a better way to get it working, but that will have to wait, since today I will procrastinate in favor of my now-possible-to-play-on-my-xbox, just-downloaded video.<br />
<br />
*I found all this info here: <a href="https://lists.ubuntu.com/archives/ubuntu-us-nm/2007-December/000368.html">https://lists.ubuntu.com/archives/ubuntu-us-nm/2007-December/000368.html</a><br />
(yeah, I never expected to find something useful in ubuntu archives)Anonymoushttp://www.blogger.com/profile/15102034190185236231noreply@blogger.com0