DREAMWEAVER Y U NO

Why I F@#king hate WYSIWYG code editors


UPDATE: After a little rooting around through the project, I found out why all the HTML entities are in the first screenshot. This still doesn't take away from the point I try to convey in this post.

I'm working on a website project at the moment that was originally created with Dreamweaver, switching between Layout and Code-editing modes (no idea which Creative Suite version, but that doesn't matter because the WYSIWYG editor in every version sucks). I cannot help but facepalm every time I open a new PHP file that had previously been opened in Layout Mode. It's almost like torture, like taking a rusty spoon and gouging your eyes out.

DREAMWEAVER Y U MASSACRE CODE?

For those of you familiar with what I'm getting at, read no further; you have been (and probably still are) bothered by this and understand completely. For everyone else, let me give you an example of what I'm talking about…

Bad HTML Code

Yeah, it's THAT bad. What the hell is with all the converted HTML entities and non-breaking spaces? Why is the formatting not wrapped/structured? WHY DOES NOTHING IN THIS WORLD MAKE SENSE?! Now take a look at the clean, formatted code I get by NOT using Layout Mode and instead using strictly a text/code editor, Sublime Text 2 (which I will cover in another blog post on why I f#$king LOVE Sublime Text 2):

Same giant chunk of text and HTML with a few alterations. Pretty, ain't it? Now, all this isn't to say I have the cleanest HTML on the planet, but at least mine is far more manageable. I know that HTML technically isn't "code" but rather markup, but my point still stands. I have nothing against the programmer/coder who originally created the file; all my hatred goes towards the WYSIWYG editor that completely destroyed the original formatting. Formatting of scripts/HTML files is my biggest pet peeve, and I will go to great lengths to drive home how important it is.

So my question to anyone out there is: What makes you cringe and question your sanity about the quirks of programming (in any context, formatting, data type handling, etc.)?

Setting up CIITIX-WiFi


In my employment, the current method of wireless network authentication is based solely on MAC address filtering, using a white-list of known MAC addresses to ensure only pre-approved users can access network resources. There are a few pros to this system, but the cons outweigh them. Simply put, security and ease of use for administrators are key. MAC addresses can easily be spoofed by experienced clients connecting to the network, and the current system requires that a spreadsheet in a network folder be updated every time a new user wants to join the network or connect to another wireless router. Because this system has its flaws, I thought it was time to move to a more manageable system that could be integrated into the current infrastructure. One of my suggestions to my employer was to implement a RADIUS server authenticating against a MySQL database. They liked the idea and put me to the task of implementing it and making it work.

The criteria I sought to fulfill were:

  • The system needed to be easily integrated into the current infrastructure
  • It needed to be cheap, or even better, free
  • It needed to be easy to manage
  • If I were terminated today, the next employee in my position could administer the server
  • And for my own personal gain: it needed to be fun to implement and learn!

The system I chose to use, after much diligent research (and after failing to compile FreeRADIUS from source with daloRADIUS as the web-based front-end administrative console and MySQL as the database back-end), is a turn-key hotspot and wireless network gateway system built on Debian Linux called CIITIX-WiFi.

From the CIITIX-WiFi page:

It is a turnkey solution to your WiFi hotspot needs.
Built onto the rock solid stable debian linux, setting up a secure (TTLS) WiFi hotspot is just a minute away.

As mentioned it has been customized (patched with ssl support). For ease of management through GUI, a light weight Joe’s Windows Manager (JWM) has been setup onto which, ice-weasel has been added for GUI based configuration of “Access Points” (NAS devices) & the users. The distribution is a light weight (~340 MB) live-CD coupled with installation support onto persistent media.

Downloading and Installing CIITIX-WiFi

After running through an installation and testing with some fake user accounts, I was able to set up a RADIUS-MySQL network authentication system within 20 minutes, start to finish (excluding the time to download the ISO and to learn the installation and configuration procedure).

Download the ISO from the SourceForge project page, as the CIITIX-WiFi webpage has a slow mirror. Once downloaded, burn it to a CD with your favourite ISO burning program (in my case the built-in Windows 7 burning tool). I won't cover how to do this here; if you are setting up a system like this, you should at least know how to burn an ISO.

Insert the CD into the machine that will serve as the server and begin the install (you may have to adjust the boot order of your machine to boot from disc). Once in the installer, pick the first "GUI install" option and hit Next. From here, select your language, region, hostname, and timezone, and when it asks how you want the hard disk set up, select "Guided - use entire disk". After selecting the type of disk and partition layout, accept the configuration and relax while CIITIX-WiFi installs.

Once installed, the installer will ask you to input a root account password. It is highly recommended to use a strong password with numbers, letters and some special characters. There are many password generators on the Internet; one that I frequent is the PC Tools Random Password Generator. I created a 14-character password which includes letters (e.g. abcdef), mixed case (e.g. AbcDEf), numbers (e.g. a9b8c7d), and punctuation/special characters (e.g. a!b*c_d). After the root account is set up, a user account needs to be created. Again, the password should be strong, as you don't want any unauthorized users in the system.
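If you would rather generate a password locally instead of through a website, most Linux/Unix systems can do it from a terminal; for example, assuming openssl is installed:

# print a random, base64-encoded string suitable for use as a password
openssl rand -base64 12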

You will want to install GRUB to the hard disk, then continue so that the installation can complete and the system restart. Once the system has rebooted, log in using the user credentials entered during setup. You will be presented with a simple GUI with a taskbar once you are logged in.

Configuration

Now we want to find the IP address of the system, for use when pointing the Access Point at the RADIUS server. Click on the JWM button in the bottom-left corner and select Terminal. Find the IP address by typing ifconfig; if that doesn't work, type ip addr. I won't go into why ip addr works where ifconfig doesn't on some systems, as that is a discussion about the deprecation of certain terminal commands which isn't covered here.
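In the Terminal, that looks like the following; the address you want is the inet entry for your wired or wireless interface, not the loopback (lo):

ifconfig
# or, on systems where ifconfig is deprecated or missing
ip addr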

We also want to set up remote administration of daloRADIUS, as the default setup allows localhost only. In the Terminal, elevate to the root account. Once in an elevated shell, edit the /etc/apache2/apache2.conf file so that daloRADIUS can be accessed from outside the local machine. The relevant section is around lines 290 and 291, where only localhost may access the page based on the "allow, deny" rules, which explicitly allow localhost and 127.0.0.1. Adding "Allow from all" and either deleting or commenting out the localhost entries gives us the ability to administer the server remotely through a web browser.

$ su -
$ vim /etc/apache2/apache2.conf

In /etc/apache2/apache2.conf, change the daloRADIUS directory entry from this

# apache2.conf
<Directory "/var/www/">
Options -Indexes
</Directory>
Alias /dalo "/var/www/daloradius/"
<Directory "/var/www/daloradius/">
        Options Indexes MultiViews FollowSymLinks
        Order allow,deny
        Allow from 127.0.0.1
        Allow from localhost
</Directory>

to this

# apache2.conf
<Directory "/var/www/">
Options -Indexes
</Directory>
Alias /dalo "/var/www/daloradius/"
<Directory "/var/www/daloradius/">
        Options Indexes MultiViews FollowSymLinks
        Order allow,deny
        Allow from all
</Directory>
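After saving the change, Apache needs to re-read its configuration for the new rule to take effect. Assuming the stock Debian init script, reloading it as root should be enough:

/etc/init.d/apache2 reload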

NOTE: If you try to use the sudo command here to elevate a single command, it will fail and tell you that you are not in the sudoers file. By elevating to root before you enter a command, you are essentially opening up the whole system, so be VERY CAREFUL when running ANYTHING as the root account. Standard practice says that no user should run as root day-to-day, and that users should find ways to execute system commands without full elevation.

Now that the server is able to be administered remotely, I switched to my MacBook and continued the rest of the configuration there.

Resources & Links

  • CIITIX-WiFi, the turn-key hotspot and wireless network gateway system built on Debian Linux – http://ciitix.ciit.net.pk/index.php/ciitix-wifi
  • The CIITIX-WiFi documentation and usage-guide page, where basic installation instructions can be found – http://ciitix.ciit.net.pk/index.php/ciitix-wifi-usage-guide
  • FreeRADIUS, the technology CIITIX-WiFi uses to authenticate users against a MySQL database – http://freeradius.org/
  • daloRADIUS, the web-based front-end used to administer FreeRADIUS – http://daloradius.com/
  • The PC Tools Secure Password Generator – https://secure.pctools.com/guides/password/

Asus Transformer and PS3 controller living in harmony

Control Transformer TF101 Android 3.2.1 tablet using PlayStation 3 controller


Android Honeycomb 3.2.1

With the release of Honeycomb 3.2 came support for many USB gamepad devices, including the Xbox 360 and PlayStation 3 controllers. Sadly, bluetooth gamepads are in very short supply, and the IME drivers included in Honeycomb aren't compatible with a whole whack of wireless devices. This is where the hacking/modding community comes into play: Dancing Pixel Studios released a bluetooth PlayStation 3 driver in the form of an app, for $1.66 in the Android Market, that is compatible with a large host of tablets and Android versions including Honeycomb 3.2 and 2.3.

**DISCLAIMER**: That being said, the application DOES require your device to be rooted, so it is not for the faint of heart. I do not condone rooting your device unless you are completely aware of the risks involved, including bricking your device and rendering it a useless hunk of technology, as well as voiding your manufacturer's warranty.

Asus Transformer TF101 tablet

The tablet that I tested all of these steps on is the Asus Transformer TF101 running the latest update for Android, 3.2.1, from Asus. I have rooted it with the one-click-root from rebound821 in the XDA Developers forums. I won’t be outlining the rooting process here as the link takes you to a very simple “one-click” procedure.

Getting the tablet and the controller to talk nicely

Because the actual bluetooth driver app is a paid app at $1.66, the developers at DPS have put out a free compatibility checker app to ensure that your bluetooth controller works with your device, and to display your tablet's local bluetooth address, which you will set on your controller. When you purchase a bluetooth Sixaxis/DualShock 3 PlayStation 3 controller, its master bluetooth address is not set for this procedure; it will pair with any PS3 you connect it to. To make sure the controller connects to our tablet, you have to change its master bluetooth MAC address.

On your PC/Mac/Linux computer, download the tool for your respective OS from the Dancing Pixel Studios website and run it. Enter your tablet's local bluetooth address into the application and bingo! You have now changed your controller's master bluetooth MAC address.

In my testing, I used OS X 10.7.1 on my 2010 MacBook Pro. I downloaded the sixpair command line utility, made it executable on my system, and ran it to change the bluetooth MAC address. Use your tablet's local MAC address for this, as 00:11:22:33:44:55 is DEFINITELY not the same as yours. Be sure to back up your controller's current MAC address in a text file somewhere, in case you want to use it again for its intended purpose on its designed console.

$ chmod +x sixpair
$ ./sixpair 00:11:22:33:44:55

A quick run-through of images: checking the MAC address on the controller, then changing it.
Sixaxis Compatibility Checker

Loading Sixaxis Compatibility Checker

After clicking Start, which starts the specialized bluetooth driver, the app confirms that you indeed want to continue.

Running Sixaxis Compatibility Checker

If all goes well, you will see the image below saying your controller is compatible with your Android device.

Supported Device

Once it is confirmed that your controller is ready to be paired with your tablet, you need to download the full version of the Sixaxis Controller app for $1.66 and begin mapping the appropriate buttons to the right keys. The procedure to pair your controller is much the same as testing for compatibility, only this time you hold the PlayStation Home button on the controller until the lights begin to flash and the number one light goes solid. On your tablet screen you should see "Client connected: 1". If so, congratulations, you have successfully paired your PS3 controller with your tablet.

Sixaxis Controller App

Key Mapping in N64oid

Out of the box, the Sixaxis Controller app has most of the controller's keys mapped. The D-pad is ready to go, as well as the Select and Start buttons. The analog sticks and the Triangle, Circle, X and Square buttons, however, need some tweaking in order to work. I found a very good resource in the XDA Developers forums in a post by the user isolated_epidemic. They outline how the analog stick should be mapped to the C-buttons in the N64 emulator N64oid, but for the most part the key mappings are pretty straightforward.

The Machine


So there is a project I've been wanting to start for quite a while now, spawned from an idea I had when I was in university at VIU. It started out as this gigantic full-fledged webapp with all the bells and whistles, but as I started to map the pseudo-code out in my head, then on paper, I began to realize how big a task it was going to be and how much time it would take to code, test, and fix bugs, and I came to the conclusion that it would stay an idea. Well, this past Sunday the idea came to fruition: may I present, The Machine.

It's a very simple concept right now, where anyone can anonymously post anything on a public "wall" of sorts. I plan on actively working on it, as there are many technologies I wish to incorporate to eventually make it a full-fledged web app. Now, many of you might ask: why do I need another place on the internet to post stuff which will probably never go viral? Well, it depends on what you make of it, really.

I will be constantly working to improve The Machine in my spare time, as well as keeping up with my technology blogging about my experiences in my IT career. It may go through some name changes, but for now it will remain "The Machine" (short for The Content Machine; catchy, I know, right? :) )

Distributed File Systems


This assignment involved researching and implementing a few different Linux file systems that allow users to mount a remote filesystem and work directly with files that are physically stored on a different machine. To provide some context to the assignment, imagine you are setting up the network for a small web development company where users are constantly sharing resources on different servers. Rather than constantly duplicating this data (e.g. a shared database of images used by the graphics designers), we are looking for a FAST network filesystem to allow users to read and write files on remote servers.

Before starting this assignment, anyone who attempts it should be familiar with the process of 'mounting' filesystems: editing the fstab, adding a new physical disk to a machine, formatting it with a Linux filesystem, permanently mounting it into the local filesystem, and so on.

To test the speeds, there are many different methods; I used a script that reports megabytes per second (covered in the Timing network transfers section at the end of this post).

To have a uniform file to test with, I used an Arch Linux ISO.

I implemented and compared the following technologies:

  • sshfs
  • samba
  • nfs
  • glusterfs

Hardware requirements

  • Client machine running Ubuntu Desktop 10.10
  • Server machine running Ubuntu Linux Server 10.04 LTS
  • Server machine running Windows Server 2008 R2

Hardware Specs

Ubuntu 10.10 x64 Desktop – Client

  • 1 CPU
  • 1024MB RAM
  • 80GB HDD
  • Bridged Network Adapter

Ubuntu 10.04 x64 Server

  • 1 CPU
  • 1024MB RAM
  • 80GB HDD
  • Bridged Network Adapter

Windows Server 2008 R2 x64

  • 1 CPU
  • 2048MB RAM
  • 80GB HDD
  • Bridged Network Adapter

Setting up ‘passwordless’ ssh login

On the Ubuntu client machine, I created an SSH key with the ssh-keygen utility, specifying where to put the generated RSA key, then ran ssh-copy-id to copy the public key to the server whose drive I wanted to mount. Because ssh-copy-id can sometimes add multiple keys, I ssh'd into the server to ensure that there were no duplicate keys that could cause problems.

ssh-keygen -f ~/.ssh/id_rsa

One thing to be careful of here, as I realized after a few tries, is that you can't run these commands with sudo or as the root user: the key will be created for the root account, making root able to ssh without a password rather than the account intended.

ssh-copy-id -i ~/.ssh/id_rsa.pub brandon@192.168.1.120
ssh brandon@192.168.1.120

Because I configured it correctly, I was never asked for a password when sshing to the remote server again.

sshfs

I first installed SSHFS using the apt-get utility, then made a new folder to be the sshfs mountpoint

sudo apt-get install sshfs
mkdir ~/brandonRemote

After installing sshfs on the Ubuntu client I connected a single time using this command to ensure that it worked:

sshfs brandon@192.168.1.120:/home/brandon ~/brandonRemote

This is the line that I entered at the bottom of my /etc/fstab file:

sshfs#brandon@192.168.1.120:/home/brandon /home/brandon/brandonRemote fuse user,noauto 0 0

In order to mount the drive, I ran

mount ~/brandonRemote

and because I don’t want to run sudo commands, I ran the following command to unmount the drive:

fusermount -u ~/brandonRemote

Setting up sshfs to auto-mount on boot was somewhat cumbersome; I accomplished it by creating a new launcher on the desktop, setting the name to "Mount brandonRemote", the command to "mount /home/brandon/brandonRemote", and the type to "Application in Terminal". When I double-click this launcher once logged in, it mounts my remote sshfs share and allows me to access it.
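For the curious, that launcher is just a small .desktop file saved on the Desktop behind the scenes; creating the equivalent by hand would look roughly like this (the filename is up to you, e.g. Mount-brandonRemote.desktop):

[Desktop Entry]
Type=Application
Name=Mount brandonRemote
Exec=mount /home/brandon/brandonRemote
Terminal=true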

smbfs

Many of us that have 'large collections of backed-up DVDs' have probably implemented this before. Basically, I install smbfs and tools such as smbmount to connect to a Windows share with data on it.

On the Windows Server 2008 R2 machine, I installed the File Services role in order to have proper file-sharing capabilities. Going through the setup, I ensured that the Distributed File Systems box was ticked, with DFS Namespace and DFS Replication also checked off. I chose to create a DFS namespace later and clicked continue. Once installed, I went to File Services in the Server Manager and provisioned a new share with the Share and Storage Manager. I created a new folder in C:\ called "Share" and changed the permissions on the folder to "Administrators have full access; all other users have Read and Write". Once the share was provisioned, I clicked close.

Because I wanted this setup to be a bit more secure than connecting to the Administrator account on the Windows machine, I created a new user named brandon in Local Users and Groups and set a password. This is the user I would use to authenticate for mounting the smbfs share.

On the Ubuntu client machine I installed the smbfs filesystem and created a mount point folder:

sudo apt-get install smbfs
sudo mkdir /media/smbfsMount

Unlike sshfs, smbfs/cifs shares are mounted by the kernel rather than through FUSE, and depending on how the share permissions on the Windows machine are set, a credentials file is required so the share can be mounted (at boot, or by root) without typing a password. First I created the .smbcredentials file and filled it with the required content, then made an entry in the fstab:

sudo vim ~/.smbcredentials
# .smbcredentials entry
username=brandon
password=PASSWORDHERE
sudo chmod 600 ~/.smbcredentials
# fstab entry
//192.168.1.119/Share /media/smbfsMount cifs credentials=/home/brandon/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0

After a quick test to make sure that the share was mountable and a reboot to ensure that it mounted on startup, the smbfs method was completed.

sudo mount -a
sudo reboot

NFS

NFS is perhaps best for more ‘permanent’ network mounted directories such as /home directories or regularly accessed shared resources. If you want a network share that guest users can easily connect to, Samba is more suited. This is because tools exist more readily across operating systems to temporarily mount and detach from Samba shares.

-Ubuntu Wiki

On the Ubuntu server machine, I installed the NFS kernel server package. For this assignment, I created a folder and share at /files and added it to the /etc/exports file. The parameters used allow users on the 192.168.1.0/24 network (192.168.1.1 to 192.168.1.255) to read and write to the share. After editing the file, I restarted the NFS server.

sudo apt-get install nfs-kernel-server
sudo mkdir /files
sudo vim /etc/exports
# /etc/exports entry
/files 192.168.1.0/24(rw,no_root_squash,async)
sudo service nfs-kernel-server restart

After making any changes to the exports file, this command must be run:

sudo exportfs -a

On the Ubuntu client I installed the NFS common package, which contains the NFS client. Once that was installed, I ran the showmount utility to ensure that the NFS server was working and that the entries in the exports file were visible. Because the /files share showed up, I created a mount point, mounted it, and made sure that it mounted correctly.

sudo apt-get install nfs-common
showmount -e 192.168.1.120
mkdir ~/nfsMount
sudo mount 192.168.1.120:/files ~/nfsMount

From here, I created an entry in the /etc/fstab file to mount this share on boot then rebooted:

192.168.1.120:/files /home/brandon/nfsMount nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

Like SSHFS and Samba, NFS is a widely used network filesystem, and there are ways to improve its performance. One of them is to change the block size used for reads and writes. Not every environment performs best with the default block size, so bumping rsize/wsize up to 8192 (as in the fstab entry above) is often preferred. Testing which block size is more efficient can be done by running the following command:

time dd if=/dev/zero of=~/nfsMount/testfile bs=16k count=16384

This creates a 256 MB file of zeroed bytes on the NFS mount; the transfer is timed, giving MB/s, the time in seconds, and the system resources used to transfer the file.
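To compare block sizes, the share can be unmounted and remounted with different rsize/wsize values between test runs; the values below are just examples:

sudo umount ~/nfsMount
sudo mount -o rsize=16384,wsize=16384 192.168.1.120:/files ~/nfsMount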

After several tests at a 16k block size, I averaged out the numbers and ended up with

268435456 bytes (268 MB) copied, 23.661 s, 11.3 MB/s
0.00user 0.82system 0:23.73elapsed 3%CPU (0avgtext+0avgdata 3232maxresident)k
0inputs+524288outputs (0major+251minor)pagefaults 0swaps
268435456 bytes (268 MB) copied, 24.1047 s, 11.1 MB/s
0.00user 1.24system 0:24.15elapsed 5%CPU (0avgtext+0avgdata 3200maxresident)k
0inputs+524288outputs (0major+249minor)pagefaults 0swaps
268435456 bytes (268 MB) copied, 23.866 s, 11.2 MB/s
0.00user 1.22system 0:23.91elapsed 5%CPU (0avgtext+0avgdata 3216maxresident)k
0inputs+524288outputs (0major+250minor)pagefaults 0swaps
268435456 bytes (268 MB) copied, 23.8043 s, 11.3 MB/s
0.00user 0.71system 0:23.84elapsed 2%CPU (0avgtext+0avgdata 3216maxresident)k
0inputs+524288outputs (0major+251minor)pagefaults 0swaps
268435456 bytes (268 MB) copied, 23.849 s, 11.3 MB/s
0.00user 1.29system 0:23.89elapsed 5%CPU (0avgtext+0avgdata 3216maxresident)k
0inputs+524288outputs (0major+250minor)pagefaults 0swaps
0.00user 0.93system 0:23.75elapsed 3%CPU (0avgtext+0avgdata 3216maxresident)k
0inputs+524288outputs (0major+250minor)pagefaults 0swaps
268435456 bytes (268 MB) copied, 23.6705 s, 11.3 MB/s
0.00user 0.84system 0:23.71elapsed 3%CPU (0avgtext+0avgdata 3232maxresident)k
0inputs+524288outputs (0major+251minor)pagefaults 0swaps

glusterfs

GlusterFS is a clustered file system capable of scaling to several petabytes. It allows you to combine several storage bricks into one large parallel network file system.

I started by installing glusterfs on the server where my shares would sit

sudo aptitude install glusterfs-server

Then I created a few directories to use with glusterfs

sudo mkdir /data/
sudo mkdir /data/export
sudo mkdir /data/export-ns

Before modifying anything, I made a copy of the original /etc/glusterfs/glusterfsd.vol configuration file which defines which directories will be exported and what client is allowed to connect.

sudo cp /etc/glusterfs/glusterfsd.vol /etc/glusterfs/glusterfsd.vol_orig
sudo sh -c 'cat /dev/null > /etc/glusterfs/glusterfsd.vol'
sudo vim /etc/glusterfs/glusterfsd.vol

I edited the configuration file to read:

volume posix
  type storage/posix
  option directory /data/export
end-volume
volume locks
  type features/locks
  option mandatory-locks on
  subvolumes posix
end-volume
volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.1.* # Edit and add list of allowed clients comma separated IP addrs(names) here
  subvolumes brick
end-volume

This configuration basically reads: export the /data/export directory as a POSIX storage volume, layer mandatory locking and an eight-thread I/O pool on top of it, and serve the resulting brick over TCP to the clients allowed by auth.addr.brick.allow.

On the Ubuntu client, I went through the same routine of downloading and configuring glusterfs, but added a second package in apt-get specifically for the client:

sudo apt-get install glusterfs-client glusterfs-server

Then I created the mount point for glusterfs

mkdir ~/glusterMount

Next I edited the configuration file /etc/glusterfs/glusterfs.vol, but not before making a backup of it

sudo cp /etc/glusterfs/glusterfs.vol /etc/glusterfs/glusterfs.vol_orig
sudo sh -c 'cat /dev/null > /etc/glusterfs/glusterfs.vol'
sudo vim /etc/glusterfs/glusterfs.vol

and inserted the following entries:

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.120
  option remote-subvolume brick
end-volume
volume writebehind
  type performance/write-behind
  option window-size 4MB
  subvolumes remote
end-volume
volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

Testing the mount is just like mounting any other share, except that the filesystem type must be explicitly set to glusterfs. After mounting, you can ensure that it is done correctly by checking which filesystems are attached with the mount and df -h commands:

sudo mount -t glusterfs /etc/glusterfs/glusterfs.vol ~/glusterMount
mount
df -h

In order for glusterfs to auto-mount at boot, I added a line at the bottom of the /etc/fstab file for the glusterfs share, then rebooted to make sure it auto-mounted. After booting again, running df -h and mount is a really good idea.

/etc/glusterfs/glusterfs.vol /home/brandon/glusterMount glusterfs defaults 0 0
reboot
mount
df -h

Timing network transfers

Because every network is different, it's always nice to have a benchmark of what the one you are on can actually do. An easy way to do this is to transfer a large file across the network, timing how long it takes to reach its destination and how many resources were used to do so. On Linux/Unix systems this can be done with a bash script that handles the timing for you. I used an Ubuntu Linux ISO as my test file. A script that I found did the timing for me.
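That exact script isn't shown here, but a minimal sketch of such a timing script might look like the following; the ISO path and the destination mount point are placeholders and should be changed to match your own setup:

#!/bin/bash
# Rough timing script: copy a large file onto a mounted network share
# and report the throughput. SRC and DEST below are placeholders.
SRC=~/ubuntu-10.10-desktop-amd64.iso
DEST=~/nfsMount/testfile.iso

SIZE=$(stat -c%s "$SRC")
START=$(date +%s)
cp "$SRC" "$DEST"
sync    # flush the write cache so the timing is honest
END=$(date +%s)

ELAPSED=$((END - START))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1
echo "Copied $SIZE bytes in $ELAPSED seconds"
echo "Throughput: $(( SIZE / 1048576 / ELAPSED )) MB/s"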

Resources


MD5 Hash-o-matic


I have decided to create a small utility that uses the MD5 hash function to hash data entered in one text field and output the result in another. It is great for comparing text against an MD5 hash. Check it out over at md5.brandonb.ca!
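As a quick sanity check of the output, the same hash can be computed from a terminal with the md5sum utility found on most Linux systems (the equivalent command on OS X is md5):

# MD5 of the string "some text" (the -n keeps echo from adding a newline)
echo -n "some text" | md5sum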
